Are parents infecting their children with contagious negativity?
A couple of weeks ago I stumbled across a report on a Pew Research Center survey titled “Parenting in America today” (Pew Research Center, Jan. 24, 2023), which found that 40% of parents in the United States with children younger than 18 are “extremely or very worried” that at some point their children might struggle with anxiety or depression. Another 36% replied that they were “somewhat” worried. This combined total of 76% represents a significant change from the 2015 Pew Research Center survey, in which only 54% of parents were “somewhat” worried about their children’s mental health.
Prompted by these findings I began work on a column in which I planned to encourage pediatricians to think more like family physicians when we were working with children who were experiencing serious mental health problems. My primary message was going to be that we should turn more of our attention to the mental health of the anxious parents who must endure the often long and frustrating path toward effective psychiatric care for their children. This might come in the form of some simple suggestions about nonpharmacologic self-help strategies. Or, it could mean encouraging parents to seek psychiatric care or counseling for themselves as they wait for help for their child.
However, as I began that column, my thoughts kept drifting toward a broader consideration of the relationship between parents and pediatric mental health. If the mental health of children is causing their parents to be anxious and depressed, isn’t it just as likely that the connection runs in both directions? This was not exactly an “aha” moment for me, because it is a relationship I have considered for some time. However, it is a concept that, I have come to realize, receives far too little attention.
There are exceptions. For example, a recent opinion piece in the New York Times by David French, “What if Kids Are Sad and Stressed Because Their Parents Are?” (March 19, 2023) echoes many of my concerns. Drawing on his experiences traveling around college campuses, Mr. French observes, “Just as parents are upset about their children’s anxiety and depression, children are anxious about their parent’s mental health.”
He notes that an August 2022 NBC News poll found that 58% of registered voters feel this country’s best days are behind it, and he joins me in suspecting that this negative mindset is filtering down to the pediatric population. He acknowledges that there are other likely contributors to teen unhappiness, including the ubiquity of smartphones, the secularization of society, and the media’s focus on the political divide. However, Mr. French wonders whether a parenting style that results in childhood experiences dominated by adult supervision and protection may also be playing a large role.
In his conclusion, Mr. French asks us to consider “How much fear and anxiety should we import to our lives and homes?” as we adults search for an answer.
As I continued to drill down for other possible solutions, I encountered an avenue of psychological research that suggests that instead of, or in addition to, filtering out the anxiety-generating deluge of information, we begin to give some thought to how our beliefs may be coloring our perception of reality.
Jeremy D.W. Clifton, PhD, a psychologist at the University of Pennsylvania Positive Psychology Center, has done extensive research on the relationship between our basic beliefs about the world (known as primal beliefs, or simply primals, in psychologist lingo) and how we interpret reality. For example, one of your primal beliefs may be that the world is a dangerous place. I, on the other hand, may see the world as a stimulating environment offering me endless opportunities to explore. I may see the world as an abundant resource limited only by my creativity. You, however, see it as a barren wasteland.
Dr. Clifton’s research has shown that our primals (at least those of adults) are relatively immutable through one’s lifetime and “do not appear to be the consequence of our experiences.” For example, living in a ZIP code with a high crime rate does not predict that you will see the world as a dangerous place. Nor does being affluent guarantee that an adult sees the world rich with opportunities.
It is unclear exactly when and by what process we develop our primal beliefs, but it is safe to say our parents probably play a large role. Exactly to what degree the tsunami of bad news we are allowing to inundate our children’s lives plays a role is unclear. However, it is reasonable to assume that news about climate change, school shootings, and the pandemic must be a contributor.
According to Dr. Clifton, there is some evidence that certain mind exercises, when applied diligently, can occasionally modify the primal beliefs of an individual who sees the world as dangerous and barren. Until such strategies become more readily accessible, the best we can do is acknowledge that our children are like canaries in a coal mine full of negative perceptions, then do our best to clear the air.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
‘Startling’ cost barriers after abnormal screening mammogram
Despite federal legislation doing away with cost-sharing for initial breast cancer screening, out-of-pocket costs for needed follow-up tests remain significant financial barriers for many women.
An analysis of claims data found that women with higher cost-sharing undergo fewer subsequent breast diagnostic tests after an abnormal screening mammogram, compared with peers with lower cost-sharing.
“The chief clinical implication is that women with abnormal mammograms – that is, potentially at risk for cancer – are deciding not to follow-up on diagnostic imaging because of high out-of-pocket costs,” Danny Hughes, PhD, professor, College of Health Solutions, Arizona State University in Phoenix, told this news organization.
One course of action for radiologists is to “strongly communicate the importance of adhering to recommended follow-on testing,” Dr. Hughes said.
Another is to “work to pass national and state legislation, such as recently passed [legislation] in Connecticut, that removes out-of-pocket costs for follow-on diagnostic breast imaging and biopsy in the same way that these patient costs are prohibited for screening mammography,” he suggested.
The study was published online in JAMA Network Open.
‘Worrisome’ findings
The Affordable Care Act removed out-of-pocket costs for preventive health care, such as screening mammograms in women aged 40 and over.
However, lingering cost barriers remain for some individuals who have a positive initial screening mammogram and need follow-up tests. For instance, research shows that women in high-deductible plans, which often have higher out-of-pocket costs than other plans, may experience delays in follow-on care, including diagnostic breast imaging.
Dr. Hughes and colleagues examined the association between the degree of patient cost-sharing across different health plans – those dominated by copays, coinsurance, or deductibles as well as those classified as balanced across the three categories – and the use of diagnostic breast cancer imaging after a screening mammogram.
The data came from Optum’s database of administrative health claims for members of large commercial and Medicare Advantage health plans. The team used a machine learning algorithm to rank patient insurance plans by type of cost-sharing.
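The paper does not spell out the classification algorithm here, but the idea of grouping plans by their dominant cost-sharing mechanism can be illustrated with a simple rule-based sketch. This is not the authors’ method, and all plan figures below are invented for illustration.

```python
# Hypothetical illustration of classifying insurance plans by dominant
# cost-sharing category (copay, coinsurance, or deductible), with a
# "balanced" label when no category clearly dominates. The study used a
# machine learning approach; this rule-based version only conveys the concept.

def classify_plan(copay, coinsurance, deductible, balance_margin=0.10):
    """Label a plan by whichever cost-sharing category accounts for the
    largest share of out-of-pocket spending; call it 'balanced' when the
    top share exceeds the runner-up by no more than balance_margin."""
    total = copay + coinsurance + deductible
    if total == 0:
        return "balanced"
    shares = {
        "copay": copay / total,
        "coinsurance": coinsurance / total,
        "deductible": deductible / total,
    }
    top, runner_up = sorted(shares.values(), reverse=True)[:2]
    if top - runner_up <= balance_margin:
        return "balanced"
    return max(shares, key=shares.get)

# Invented example plans: average member out-of-pocket dollars per category.
plans = {
    "Plan A": (700, 100, 200),   # mostly copays
    "Plan B": (100, 150, 900),   # mostly deductible
    "Plan C": (350, 330, 340),   # roughly even split
}
for name, (cp, ci, de) in plans.items():
    print(name, "->", classify_plan(cp, ci, de))
```

In practice, a real classifier would work from millions of claim lines rather than three summary numbers, but the output categories match the four plan types the study compares.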
The sample included 230,845 mostly White (71%) women 40 years and older with no prior history of breast cancer who underwent screening mammography. These women were covered by 22,828 distinct insurance plans associated with roughly 6 million enrollees and nearly 45 million distinct medical claims.
Plans dominated by coinsurance had the lowest average out-of-pocket costs ($945), followed by plans balanced across the three cost-sharing categories ($1,017), plans dominated by copays ($1,020), and plans dominated by deductibles ($1,186).
Compared with women with coinsurance plans, those with copay- and deductible-dominated plans underwent significantly fewer subsequent breast-imaging procedures – 24 and 16 fewer procedures per 1,000 women, respectively.
Use of follow-on breast MRI was nearly 24% lower among women in plans with the highest cost-sharing versus those in plans with the lowest cost-sharing.
The team found no statistically significant difference in breast biopsy use between plan types.
Considering the risks posed by an unconfirmed positive mammogram result, these findings are “startling” and call into question the efficacy of legislation that eliminated cost-sharing from many preventive services, including screening mammograms, Dr. Hughes and colleagues write.
“Additional policy changes, such as removing cost-sharing for subsequent tests after abnormal screening results or bundling all breast cancer diagnostic testing into a single reimbursement, may provide avenues to mitigate these financial barriers to care,” the authors add.
The authors of an accompanying editorial write that the study’s main finding – that some women who have an abnormal result on a mammogram may not get appropriate follow-up because of cost – is “worrisome.”
“From a population health perspective, failure to complete the screening process limits the program’s effectiveness and likely exacerbates health disparities,” write Ilana Richman, MD, with Yale University, New Haven, Conn., and A. Mark Fendrick, MD, with the University of Michigan, Ann Arbor.
“On an individual level, high out-of-pocket costs may directly contribute to worse health outcomes or require individuals to use scarce financial resources that may otherwise be used for critical items such as food or rent,” Dr. Richman and Dr. Fendrick add. And “the removal of financial barriers for the entire breast cancer screening process has potential to improve total screening uptake and follow-up rates.”
Support for the study was provided by the Harvey L. Neiman Health Policy Institute. Dr. Hughes has reported no relevant financial relationships. Dr. Richman has reported receiving salary support from the Centers for Medicare & Medicaid Services to develop health care quality measures outside the submitted work. Dr. Fendrick has reported serving as a consultant for AbbVie, Amgen, Bayer, CareFirst, BlueCross BlueShield, Centivo, Community Oncology Association, Covered California, EmblemHealth, Exact Sciences, GRAIL, Harvard University, HealthCorum, Hygieia, Johnson & Johnson, MedZed, Merck, Mercer, Montana Health Cooperative, Phathom Pharmaceuticals, Proton Intelligence, RA Capital, Teladoc Health, U.S. Department of Defense, Virginia Center for Health Innovation, Washington Health Benefit Exchange, Wildflower Health, and Yale-New Haven Health System; and serving as a consultant for and holding equity in Health at Scale Technologies, Pair Team, Sempre Health, Silver Fern Health, and Wellth.
A version of this article originally appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Some diets better than others for heart protection
In an analysis of randomized trials, the Mediterranean diet and low-fat diets were linked to reduced risks of all-cause mortality and nonfatal MI over 3 years in adults at increased risk for cardiovascular disease (CVD), while the Mediterranean diet also showed lower risk of stroke.
Five other popular diets appeared to have little or no benefit with regard to these outcomes.
“These findings with data presentations are extremely important for patients who are skeptical about the desirability of diet change,” wrote the authors, led by Giorgio Karam, a medical student at the University of Manitoba, Winnipeg.
The results were published online in The BMJ.
Dietary guidelines recommend various diets along with physical activity or other cointerventions for adults at increased CVD risk, but they are often based on low-certainty evidence from nonrandomized studies and on surrogate outcomes.
Several meta-analyses of randomized controlled trials with mortality and major CV outcomes have reported benefits of some dietary programs, but those studies did not use network meta-analysis to give absolute estimates and certainty of estimates for adults at intermediate and high risk, the authors noted.
For this study, Mr. Karam and colleagues conducted a comprehensive systematic review and network meta-analysis in which they compared the effects of seven popular structured diets on mortality and CVD events for adults with CVD or CVD risk factors.
The seven diet plans were the Mediterranean, low fat, very low fat, modified fat, combined low fat and low sodium, Ornish, and Pritikin diets. Data for the analysis came from 40 randomized controlled trials that involved 35,548 participants who were followed for an average of 3 years.
There was evidence of “moderate” certainty that the Mediterranean diet was superior to minimal intervention for all-cause mortality (odds ratio [OR], 0.72), CV mortality (OR, 0.55), stroke (OR, 0.65), and nonfatal MI (OR, 0.48).
On an absolute basis (per 1,000 over 5 years), the Mediterranean diet led to 17 fewer deaths from any cause, 13 fewer CV deaths, seven fewer strokes, and 17 fewer nonfatal MIs.
There was evidence of moderate certainty that a low-fat diet was superior to minimal intervention for prevention of all-cause mortality (OR, 0.84; nine fewer deaths per 1,000) and nonfatal MI (OR, 0.77; seven fewer deaths per 1,000). The low-fat diet had little to no benefit with regard to stroke reduction.
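Converting the reported odds ratios into these absolute figures requires an assumed baseline event rate in the comparison group. As a rough illustration only (the 6.5% five-year baseline mortality risk below is a hypothetical value chosen to approximate the reported numbers, not a figure from the study), the arithmetic works like this:

```python
def abs_diff_per_1000(odds_ratio, baseline_risk):
    """Convert an odds ratio into an absolute risk difference per 1,000 people,
    given an assumed baseline (control-group) risk over the same time horizon."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    treated_odds = odds_ratio * baseline_odds          # apply the odds ratio
    treated_risk = treated_odds / (1 + treated_odds)   # convert odds back to risk
    return (baseline_risk - treated_risk) * 1000

# OR 0.72 for all-cause mortality with a hypothetical 6.5% baseline risk
print(round(abs_diff_per_1000(0.72, 0.065), 1))  # prints 17.3
```

With that assumed baseline, an odds ratio of 0.72 corresponds to roughly 17 fewer deaths per 1,000 over 5 years, which matches the figure reported for the Mediterranean diet; a higher baseline risk (as in the high-CVD-risk subgroup) yields a larger absolute benefit from the same odds ratio.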
The Mediterranean diet was not “convincingly” superior to a low-fat diet for mortality or nonfatal MI, the authors noted.
The absolute effects for the Mediterranean and low-fat diets were more pronounced in adults at high CVD risk. With the Mediterranean diet, there were 36 fewer all-cause deaths and 39 fewer CV deaths per 1,000 over 5 years.
The five other dietary programs generally had “little or no benefit” compared with minimal intervention. The evidence was of low to moderate certainty.
The studies did not provide enough data to gauge the impact of the diets on angina, heart failure, peripheral vascular events, and atrial fibrillation.
The researchers say that strengths of their analysis include a comprehensive review and thorough literature search and a rigorous assessment of study bias. In addition, the researchers adhered to recognized GRADE methods for assessing the certainty of estimates.
Limitations of their work include not being able to measure adherence to dietary programs and the possibility that some of the benefits may have been due to other factors, such as drug treatment and support for quitting smoking.
The study had no specific funding. The authors have disclosed no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
New antiobesity drugs will benefit many. Is that bad?
The debate was stoked by a recent New England Journal of Medicine op-ed in which some economists opined that coverage of these drugs would be disastrous for Medicare.
Among their concerns? The drugs need to be taken long term (just like drugs for any other chronic condition). The new drugs are more expensive than the old drugs (just like new drugs for any other chronic condition). Lots of people will want to take them (just like highly effective drugs for any other chronic condition that has a significant quality-of-life or clinical impact). The U.K. recommended that they be covered only for 2 years (unlike drugs for any other chronic condition). And the Institute for Clinical and Economic Review (ICER), on which they lean heavily, decided that $13,618 annually was too expensive for a medication that leads to sustained 15%-20% weight losses and those losses' consequential benefits.
As a clinician working with patients who sustain those levels of weight loss, I find that conclusion confusing. Whether by way of lifestyle alone, or more often by way of lifestyle efforts plus medication or lifestyle efforts plus surgery, the benefits reported and seen with 15%-20% weight losses are almost uniformly huge. Patients are regularly seen discontinuing or reducing the dosage of multiple medications as a result of improvements to multiple weight-responsive comorbidities, and they also report objective benefits to mood, sleep, mobility, pain, and energy. Losing that much weight changes lives. Not to mention the impact that that degree of loss has on the primary prevention of so many diseases, including plausible reductions in many common cancers – reductions that have been shown to occur after surgery-related weight losses and for which there’s no plausible reason to imagine that they wouldn’t occur with pharmaceutical-related losses.
Are those discussions found in the NEJM op-ed or in the ICER report? Well, yes, sort of. However, in the NEJM op-ed, the word “prevention” isn’t used once, and unlike with oral hypoglycemics or antihypertensives, the authors state that with antiobesity medications, additional research is needed to determine whether medication-induced changes to A1c, blood pressure, and waist circumference would have clinical benefits: “Antiobesity medications have been shown to improve the surrogate end points of weight, glycated hemoglobin levels, systolic blood pressure, and waist circumference. Long-term studies are needed, however, to clarify how medication-induced changes in these surrogate markers translate to health outcomes.”
Primary prevention is mentioned in the ICER review, but in the “limitations” section where the authors explain that they didn’t include it in their modeling: “The long-term benefits of preventing other comorbidities including cancer, chronic kidney disease, osteoarthritis, and sleep apnea were not explicitly modeled in the base case.”
And they pretended that the impact on existing weight-responsive comorbidities mostly didn’t exist, too: “To limit the complexity of the cost-effectiveness model and to prevent double-counting of treatment benefits, we limited the long-term effects of treatments for weight management to cardiovascular risk and delays in the onset and/or diagnosis of diabetes mellitus.”
As far as cardiovascular disease (CVD) benefits go, you might have thought it would be a slam dunk on that basis alone, at least according to a simple back-of-the-envelope exercise presented at a recent American College of Cardiology conference. That analysis applied the semaglutide treatment-group weight changes from the STEP 1 trial to estimate the population impact on weight and obesity among 30- to 74-year-olds without prior CVD, and estimated 10-year CVD risks using BMI-based Framingham risk scores. By that accounting, semaglutide treatment of eligible American patients has the potential to prevent more than 1.6 million CVD events over 10 years.
Finally, even putting aside ICER's admittedly and exceedingly narrow base case, what lifestyle-alone studies could ICER possibly be comparing with drug efficacy? And what does "alone" mean? Does "alone" mean with a months- or years-long interprofessional behavioral program? Does "alone" mean by way of diet books? Does "alone" mean by way of simply "moving more and eating less"? I'm not aware of robust studies demonstrating any long-term meaningful, predictable, reproducible, durable weight loss outcomes for any lifestyle-only approach, intensive or otherwise.
It's difficult for me to imagine a situation in which a drug other than an antiobesity drug would be found to have too many benefits to include in a cost-effectiveness analysis, yet the analysts would feel comfortable running that analysis anyway, then recommend against the drug and fearmonger about its use.
But then again, systemic weight bias is a hell of a drug.
Dr. Freedhoff is associate professor, department of family medicine, University of Ottawa, and medical director, Bariatric Medical Institute, Ottawa. He disclosed ties with Constant Health and Novo Nordisk, and has shared opinions via Weighty Matters and social media.
A version of this article originally appeared on Medscape.com.
Don’t fear testing for, and delabeling, penicillin allergy
You are seeing a 28-year-old man for a same-day appointment. He has a history of opioid use disorder and chronic hepatitis C virus infection. He has been injecting heroin and fentanyl for more than 6 years, and his medical record shows four outpatient appointments for cutaneous infections along with three emergency department visits for the same problem in the past 2 years. His chief complaint today is pain over his left forearm for the past 3 days. He does not report fever or other constitutional symptoms.
Examination of the left forearm reveals 8 cm of erythema with induration and calor but no fluctuance. The area is moderately tender to palpation. He has no other abnormal findings on exam.
What’s your course of action?
Dr. Vega’s take
You want to treat this patient with antibiotics and close follow-up, and you note that he has a history of penicillin allergy. A note in his record states that he had a rash after receiving amoxicillin as a child.
Sometimes, we have to take the most expedient action in health care. But most of the time, we should do the right thing, even if it's harder. I would gather more history of this reaction to penicillin and consider an oral challenge, hoping that the work we put into testing him for penicillin allergy pays dividends for him now and for years to come.
Penicillin allergy is very commonly listed in patient health records. In a retrospective analysis of the charts of 11,761 patients seen at a single U.S. urban outpatient system in 2012, 11.5% had documentation of penicillin allergy. Rash was the most common manifestation listed for allergy (37% of cases), followed by unknown symptoms (20%), hives (19%), swelling/angioedema (12%), and anaphylaxis (7%). Women were nearly twice as likely as men were to report a history of penicillin allergy, and patients of Asian descent had half the reported prevalence of penicillin allergy, compared with White patients.
Only 6% of the patients reporting penicillin allergy in this study had been referred to an allergy specialist. Given the consequences of true penicillin allergy, this rate is far too low. Patients with a history of penicillin allergy have higher risks for mortality from coexisting hematologic malignancies and penicillin-sensitive infections such as Staphylococcus species. They more frequently develop resistance to multiple antimicrobials and have longer average lengths of stay in the hospital.
Getting a good history for penicillin allergy can be challenging. Approximately three-quarters of penicillin allergies are diagnosed prior to age 3 years. Some children with a family history of penicillin allergy are mislabeled as having an active allergy, even though family history is not a significant contributor to penicillin allergy. Most rashes blamed on penicillin among children are actually not immunoglobulin (Ig) E–mediated and instead represent viral exanthems.
In response to these challenges, at the end of 2022, the American Academy of Allergy, Asthma & Immunology along with the American College of Allergy, Asthma and Immunology published new recommendations for the management of drug allergy. These recommendations provide an algorithm for the active reassessment of penicillin allergy. Like other recommendations in recent years, they call for a proactive approach in questioning the potential clinical consequences of the penicillin allergy listed in the health record.
First, the guidelines recommend against any testing for previous adverse reactions to penicillin that are not IgE-mediated, such as headache, nausea/vomiting, or diarrhea. However, patients who have experienced these adverse reactions may still be reluctant to take penicillin. For them, and for adults with a history of mild to moderate reactions to penicillin more than 5 years ago, a single oral challenge with amoxicillin is practical and can be used to exclude penicillin allergy.
The oral amoxicillin challenge
After patients take a treatment dose of oral amoxicillin, they should be observed for 1 hour for any objective reaction. The clinical setting should be able to support patients in the rare case of a more severe reaction to penicillin. Subjective symptoms such as pruritus without objective findings such as rash may be considered a successful challenge, and penicillin may be taken off the list of allergies. The treating team can bill CPT codes for drug challenge testing.
Some research has supported multidose testing with amoxicillin to assess for late reactions to a penicillin oral challenge, but the current guidelines recommend against this approach based on the very limited yield in finding additional cases of true allergy with extra doses of antibiotics. One method to address this issue is to have patients advise the practice if symptoms develop within 10 days of the oral challenge, with photos or prompt clinical evaluation to assess for an IgE-mediated reaction.
Many patients, and certainly some clinicians, will have significant trepidation regarding an oral challenge, despite the low risk for complications. For these patients, as well as children with a history of penicillin allergy and patients with a history of anaphylaxis to penicillin or probable IgE-mediated reaction to penicillin in the past several years, skin testing is recommended. Lower-risk patients might feel reassured to complete an oral challenge test after a negative skin test.
Penicillin skin testing is more reliable than a radioallergosorbent test or an enzyme-linked immunoassay and carries a high specificity. However, skin testing requires the specialized care of an allergy clinic, and this resource is limited in many communities.
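For illustration only, the branching logic described above can be sketched as a small decision function. The categories, thresholds, and return strings below are hypothetical simplifications for teaching purposes, not the published AAAAI/ACAAI algorithm, and are no substitute for the guideline itself.

```python
# Hypothetical sketch of guideline-style triage for a documented penicillin
# reaction. Categories and cutoffs are illustrative, not the official algorithm.

def triage_penicillin_allergy(reaction: str, years_ago: float, age_years: float) -> str:
    """Suggest a next step for a penicillin allergy label in the chart."""
    non_ige_reactions = {"headache", "nausea", "vomiting", "diarrhea"}
    high_risk_reactions = {"anaphylaxis", "probable ige-mediated"}

    if reaction in non_ige_reactions:
        # Adverse effect, not an IgE-mediated allergy: no testing recommended.
        return "no testing needed"
    if reaction in high_risk_reactions or age_years < 18:
        # Higher-risk histories and children: skin testing first.
        return "refer for skin testing"
    if years_ago > 5:
        # Adult with a remote mild-to-moderate reaction.
        return "single oral amoxicillin challenge"
    return "refer for skin testing"


print(triage_penicillin_allergy("rash", years_ago=20, age_years=35))
```

A patient like the one in the vignette below (childhood rash after amoxicillin, decades ago) would route to the single oral challenge branch under these assumed rules.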
Many patients will have negative oral challenge or skin testing for penicillin allergy, but there are still some critical responsibilities for the clinician after testing is complete. First, the label of penicillin allergy should be expunged from all available health records. Second, the clinician should communicate clearly and with empathy to the patient that they can take penicillin-based antibiotics safely and with confidence. Repeat testing is unnecessary unless new symptoms develop.
But applying these recommendations in clinical practice is challenging on several levels, from patient and clinician fear to practical constraints on time.
Dr. Vega is health sciences clinical professor, family medicine, University of California, Irvine. He has disclosed ties with McNeil Pharmaceuticals.
A version of this article originally appeared on Medscape.com.
You are seeing a 28-year-old man for a same-day appointment. He has a history of opioid use disorder and chronic hepatitis C virus infection. He has been injecting heroin and fentanyl for more than 6 years, and his medical record shows four outpatient appointments for cutaneous infections, along with three emergency department visits for the same, in the past 2 years. His chief complaint today is pain over his left forearm for the past 3 days. He does not report fever or other constitutional symptoms.
Examination of the left forearm reveals 8 cm of erythema with induration and calor but no fluctuance. The area is moderately tender to palpation. He has no other abnormal findings on exam.
What’s your course of action?
Dr. Vega’s take
You want to treat this patient with antibiotics and close follow-up, and you note that he has a history of penicillin allergy. A note in his record states that he had a rash after receiving amoxicillin as a child.
Sometimes, we have to take the most expedient action in health care. But most of the time, we should do the right thing, even if it’s harder. I would gather more history of this reaction to penicillin and consider an oral challenge, hoping that the work that we put in to testing him for penicillin allergy pays dividends for him now and for years to come.
Impact of child abuse differs by gender
PARIS – The impact of childhood abuse on mental health differs by gender, new research shows.
Investigators found childhood emotional and sexual abuse had a greater effect on women than men, whereas men were more adversely affected by emotional and physical neglect.
“Our findings indicate that exposure to childhood maltreatment increases the risk of having psychiatric symptoms in both men and women,” lead researcher Thanavadee Prachason, PhD, department of psychiatry and neuropsychology, Maastricht (the Netherlands) University Medical Center, said in a press release.
“Exposure to emotionally or sexually abusive experiences during childhood increases the risk of a variety of psychiatric symptoms, particularly in women. In contrast, a history of emotional or physical neglect in childhood increases the risk of having psychiatric symptoms more in men,” Dr. Prachason added.
The findings were presented at the European Psychiatric Association 2023 Congress.
A leading mental illness risk factor
Study presenter Laura Fusar-Poli, MD, PhD, from the department of brain and behavioral sciences, University of Pavia (Italy), said that the differential impact of trauma subtypes in men and women indicates that both gender and the type of childhood adversity experienced need to be taken into account in future studies.
Dr. Fusar-Poli began by highlighting that 13%-36% of individuals have experienced some kind of childhood trauma, with 30% exposed to at least two types of trauma.
Trauma has been identified as a risk factor for a range of mental health problems.
“It is estimated that, worldwide, around one third of all psychiatric disorders are related to childhood trauma,” senior researcher Sinan Gül said.
Consequently, “childhood trauma is a leading preventable risk factor for mental illness,” he added.
Previous research suggests the subtype of trauma has an impact on subsequent biological changes and clinical outcomes, and that there are gender differences in the effects of childhood trauma.
To investigate, the researchers examined data from TwinssCan, a Belgian cohort of twins and siblings aged 15-35 years without a diagnosis of pervasive mental disorders.
The study included 477 females and 314 males who had completed the Childhood Trauma Questionnaire–Short Form (CTQ) and the Symptom Checklist-90 SR (SCL-90) to determine exposure to childhood adversity and levels of psychopathology, respectively.
Results showed that total CTQ scores were significantly associated with total SCL-90 scores in both men and women, as well as with each of the nine symptom domains of the SCL-90 (P < .001 for all assessments). These included psychoticism, paranoid ideation, anxiety, depression, somatization, obsessive-compulsive, interpersonal sensitivity, hostility, and phobic anxiety.
There were no significant differences in the associations with total CTQ scores between men and women.
However, when the researchers examined trauma subtypes and psychopathology, clear gender differences emerged.
Investigators found a significant association between emotional abuse on the CTQ and total SCL-90 scores in both men (P = .023) and women (P < .001), but the association was significantly stronger in women (P = .043).
Sexual abuse was significantly associated with total SCL-90 scores in women (P < .001), while emotional neglect and physical neglect were significantly associated with psychopathology scores in men (P = .026 and P < .001, respectively).
“Physical neglect may include experiences of not having enough to eat, wearing dirty clothes, not being taken care of, and not getting taken to the doctor when the person was growing up,” said Dr. Prachason.
“Emotional neglect may include childhood experiences like not feeling loved or important, and not feeling close to the family.”
In women, emotional abuse was significantly associated with all nine symptom domains of the SCL-90, while sexual abuse was associated with seven: psychoticism, paranoid ideation, anxiety, depression, somatization, obsessive-compulsive, and hostility.
Physical neglect, in men, was significantly associated with eight of the symptom domains (all but somatization), but emotional neglect was linked only to depression, Dr. Fusar-Poli reported.
“This study showed a very important consequence of childhood trauma, and not only in people with mental disorders. I would like to underline that this is a general population, composed of adolescents and young adults, which is the age at which the majority of mental disorders start,” Dr. Fusar-Poli said in an interview.
She emphasized that psychotic disorders are only a part of the “broad range” of conditions that may be related to childhood trauma, which “can have an impact on sub-threshold symptoms that can affect functioning and quality of life in the general population.”
Addressing the differential findings in men and women, Dr. Gül said this is “something that we really need to understand,” as there is likely an underlying mechanism, “and not only a biological mechanism but probably a societal one.”
Compromised cognitive, emotional function
Commenting on the findings for this news organization, Elaine F. Walker, PhD, professor of psychology and neuroscience at Emory University in Atlanta, said stress exposure in general, including childhood trauma, “has transdiagnostic effects on vulnerability to mental disorders.”
“The effects are primarily mediated by the hypothalamic-pituitary-adrenal axis, which triggers the release of cortisol. When persistently elevated, this can result in neurobiological processes that have adverse effects on brain structure and circuitry which, in turn, compromises cognitive and emotional functioning,” said Dr. Walker, who was not associated with the study.
She noted that, “while it is possible that there are sex differences in biological sensitivity to certain subtypes of childhood trauma, it may also be the case that sex differences in the likelihood of exposure to trauma subtypes is actually the key factor.”
“At the present time, there are not specific treatment protocols aimed at addressing childhood trauma subtypes, but most experienced therapists will incorporate information about the individual’s trauma history in their treatment,” Dr. Walker added.
Also commenting on the research, Philip Gorwood, MD, PhD, head of the Clinique des Maladies Mentales et de l’Encéphale at Centre Hospitalier Sainte Anne in Paris, said the results are “important … as childhood trauma has been clearly recognized as a major risk factor for the vast majority of psychiatric disorders, but with poor knowledge of gender specificities.”
“Understanding which aspects of trauma are more damaging according to gender will facilitate research on the resilience process. Many intervention strategies will indeed benefit from a more personalized approach,” he said in a statement. Dr. Gorwood was not involved with this study.
The study authors, Dr. Gorwood, and Dr. Walker report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Physical neglect, in men, was significantly associated with eight of the symptom domains (all but somatization), but emotional neglect was linked only to depression, Dr. Fusar-Poli reported.
“This study showed a very important consequence of childhood trauma, and not only in people with mental disorders. I would like to underline that this is a general population, composed of adolescents and young adults, which is the age at which the majority of mental disorders start,” Dr. Fusar-Poli said in an interview.
She emphasized that psychotic disorders are only a part of the “broad range” of conditions that may be related to childhood trauma, which “can have an impact on sub-threshold symptoms that can affect functioning and quality of life in the general population.”
Addressing the differential findings in men and women, Dr. Gül said this is “something that we really need [to] understand,” as there is likely an underlying mechanism, “and not only a biological mechanism but probably a societal one.”
Compromised cognitive, emotional function
Commenting on the findings for this news organization, Elaine F. Walker, PhD, professor of psychology and neuroscience at Emory University in Atlanta, said stress exposure in general, including childhood trauma, “has transdiagnostic effects on vulnerability to mental disorders.”
“The effects are primarily mediated by the hypothalamic-pituitary-adrenal axis, which triggers the release of cortisol. When persistently elevated, this can result in neurobiological processes that have adverse effects on brain structure and circuitry which, in turn, compromises cognitive and emotional functioning,” said Dr. Walker, who was not associated with the study.
She noted that, “while it is possible that there are sex differences in biological sensitivity to certain subtypes of childhood trauma, it may also be the case that sex differences in the likelihood of exposure to trauma subtypes is actually the key factor.”
“At the present time, there are not specific treatment protocols aimed at addressing childhood trauma subtypes, but most experienced therapists will incorporate information about the individual’s trauma history in their treatment,” Dr. Walker added.
Also commenting on the research, Philip Gorwood, MD, PhD, head of the Clinique des Maladies Mentales et de l’Encéphale at Centre Hospitalier Sainte Anne in Paris, said the results are “important … as childhood trauma has been clearly recognized as a major risk factor for the vast majority of psychiatric disorders, but with poor knowledge of gender specificities.”
“Understanding which aspects of trauma are more damaging according to gender will facilitate research on the resilience process. Many intervention strategies will indeed benefit from a more personalized approach,” he said in a statement. Dr. Gorwood was not involved with this study.
The study authors, Dr. Gorwood, and Dr. Walker report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
AT EPA 2023
Do B vitamins reduce Parkinson’s risk?
Intake of B vitamins does not appear to reduce the risk of Parkinson’s disease (PD), new research suggests. Though there was some evidence that vitamin B12 intake early in life was associated with decreased PD risk, the findings were inconsistent and were observed only in people whose daily intake was about 10 times the recommended level.
“The results of this large prospective study do not support the hypothesis that increasing folate or vitamin B6 intakes above the current levels would reduce PD risk in this population of mostly White U.S. health professionals,” lead investigator Mario H. Flores-Torres, MD, PhD, a research scientist in the department of nutrition at the Harvard T.H. Chan School of Public Health, Boston, said in an interview.
However, he added, the study “leaves open the possibility that in some individuals the intake of vitamin B12 contributes to PD risk – a finding that warrants further research.”
The findings were published online in Movement Disorders.
Mixed findings
Previous studies have suggested B vitamins – including folate, B6 and B12 – might affect PD risk, but results have been mixed.
The new study included 80,965 women from the Nurses’ Health Study (1984-2016) and 48,837 men from the Health Professionals Follow-up Study (1986-2016). The average age at baseline was 50 years in women and 54 years in men, and participants were followed for about 30 years.
Participants completed questionnaires about diet at the beginning of the study and again every 4 years.
To account for the possibility of reverse causation due to the long prodromal phase of PD, investigators conducted lagged analyses at 8, 12, 16, and 20 years.
During the follow-up period, 1,426 incident cases of PD were diagnosed (687 in women and 739 in men).
Researchers found no link between intake of vitamin B6 or folate and reduced PD risk.
Though the total cumulative average intake of vitamin B12 was not associated with PD risk, investigators noted a modest decrease in risk among participants with the highest baseline B12 intake compared with those with the lowest (hazard ratio, 0.80; P = .01).
Individuals in the highest quintile of B12 intake at baseline had an average intake of 21-22 mcg/d, close to 10 times the recommended daily intake of 2.4 mcg/d.
“Although some of our results suggest that a higher intake of vitamin B12 may decrease the risk of PD in a population of U.S. health professionals, the associations we observed were modest and not entirely consistent,” Dr. Flores-Torres said.
“Additional studies need to confirm our findings to better understand whether people who take higher amounts of B12 younger in life may have a protective benefit against PD,” he added.
The whole picture?
Commenting on the findings for this article, Rebecca Gilbert, MD, PhD, chief scientific officer of the American Parkinson Disease Association, New York, noted that checking B vitamin levels is a fairly standard practice for most clinicians. In that regard, this study highlights why this is important.
“Neurologists will often test B12 levels and recommend a supplement if your level is below the normal range,” she said. “No one is questioning the value of B12 for nerves, and [we] recommend that B12 [be] in the normal to high-normal range.”
But understanding how B vitamins may or may not affect PD risk might require a different kind of study.
“This analysis, much like many others, is trying so hard to figure out what is it in diets that affects Parkinson’s disease risk,” Dr. Gilbert said. “But we have yet to say these are the nutrients that prevent Parkinson’s or increase the risk.”
One reason for the conflicting results in studies such as this could be that the explanation for the link between diet and PD risk lies not in specific nutrients consumed but rather in the diet as a whole.
“Focusing on specific elements of a diet may not give us the answer,” Dr. Gilbert said. “We should be analyzing diet as a complete holistic picture because it’s not just the elements but how everything in what we eat works together.”
The study was funded by the National Institutes of Health and the Parkinson’s Foundation. Dr. Flores-Torres and Dr. Gilbert report no relevant conflicts.
A version of this article originally appeared on Medscape.com.
FROM MOVEMENT DISORDERS
Subclinical CAD by CT predicts MI risk, with or without stenoses
About half of middle-aged adults in the community without cardiovascular (CV) symptoms have coronary atherosclerosis by CT angiography (CTA) that puts them at substantial risk for myocardial infarction (MI), suggests a prospective cohort study.
The 10% of participants who had subclinical disease considered obstructive at CTA showed a ninefold increased risk for MI over several years. Obstructive disease seemed to elevate risk more than subclinical disease that wasn’t obstructive but still considered extensive within the coronary arteries.
The findings, based on a Copenhagen General Population Study cohort, are new for CTA but consistent with research based on coronary artery calcium (CAC) scores and other ways to assess CV risk, say researchers.
Although all participants underwent CTA, such imaging isn’t used in the general population for atherosclerosis screening. But the findings may have implications for “opportunistic screening” for subclinical coronary disease at CTA conducted for other reasons, notes the study’s report, published online in the Annals of Internal Medicine.
“Identification of luminal obstructive or extensive subclinical coronary atherosclerosis” could potentially provide “clinically relevant, incremental risk assessment” in nonischemic patients who undergo cardiac CT or electrocardiogram-gated chest CT before procedures such as arrhythmia ablation or valve repair, it states.
Such patients found with subclinical coronary atherosclerosis might potentially “benefit from referral to intensified cardiovascular primary prevention therapy,” write the authors, led by Andreas Fuchs, MD, PhD, Copenhagen University Hospital-Rigshospitalet.
The group acknowledges the findings may not entirely apply to a non-Danish population.
A screening role for CTA?
Whether CTA has a role to play in adults without symptoms “is a big, open question in the field right now,” Ron Blankstein, MD, who was not associated with the current analysis, told this news organization.
Most population studies of CV risk prediction, such as MESA, have looked at CAC scores, not CTA, and have shown that “the more plaque individuals have, the higher the risk.” The current findings are similar but novel in coming from coronary CTA in a large asymptomatic community population, said Dr. Blankstein, who is director of cardiac CT at Brigham and Women’s Hospital, Boston.
“It’s possible that patients who have obstructive plaque in general tend to have a larger amount of plaque as well,” he said. So, while the study suggests that “the more plaque individuals have, the worse their overall risk,” it also shows that the risk “is enhanced even more if they have obstructive disease.”
The Danish cohort analysis “provides a unique opportunity to study the contemporary natural history of coronary artery disease in the absence of intervention,” notes an accompanying editorial.
For example, both patients and clinicians were blinded to CTA results, and CV preventive therapies weren’t common, observe Michael McDermott, MBChB, and David E. Newby, DM, PhD, of the BHF Centre for Cardiovascular Science, University of Edinburgh.
The analysis suggests that subclinical coronary disease that is obstructive predicts MI risk more strongly than extensive coronary disease, they note, and may be present in two-thirds of MI patients. “This contrasts with symptomatic populations, where nonobstructive disease accounts for most future myocardial infarctions, presumably from plaque rupture.”
It also points to “strong associations between nonobstructive extensive disease and adverse plaque characteristics,” write Dr. McDermott and Dr. Newby. “This underscores the major importance of plaque burden” for the prediction of coronary events.
Graded risk
The analysis included 9,533 persons aged 40 years or older with available CTA assessments and no known ischemic heart disease or symptoms.
Obstructive disease, defined as presence of a luminal stenosis of at least 50%, was seen in 10% and nonobstructive disease in 36% of the total cohort, the report states.
Disease involving more than one-third of the coronary tree was considered extensive, and involvement of one-third or less was considered nonextensive; these occurred in 10.5% and 35.8% of the cohort, respectively.
There were 71 MIs and 193 deaths over a median of 3.5 years. The adjusted relative risk for MI, compared with those without coronary atherosclerosis, was:
- 7.65 (95% confidence interval, 3.53-16.57) overall in patients with extensive disease.
- 8.28 (95% CI, 3.75-18.32) in those with obstructive but nonextensive disease.
- 9.19 (95% CI, 4.49-18.82) overall in those with obstructive disease.
- 12.48 (95% CI, 5.50-28.12) in those with both obstructive and extensive disease.
The adjusted RR for the composite of death or MI was also elevated in persons with extensive disease:
- 2.70 (95% CI, 1.72-4.25) in those with extensive but nonobstructive disease.
- 3.15 (95% CI, 2.05-4.83) in those with extensive and obstructive disease.
“It’s one thing to show that the more plaque, the higher the risk,” Dr. Blankstein said. But “does the information ultimately lead to better outcomes? Do patients have fewer MIs or fewer deaths?” Several ongoing randomized trials are exploring these questions.
They include DANE-HEART (Computed Tomography Coronary Angiography for Primary Prevention), projected to enroll about 6,000 participants from the Copenhagen General Population Study cohort who have at least one CV risk factor, and SCOT-HEART 2 (second Computed Tomography Coronary Angiography for the Prevention of Myocardial Infarction), enrolling a similar cohort in Scotland.
The study was supported by grants from AP Møller og Hustru Chastine Mc-Kinney Møllers Fond, the Research Council of Rigshospitalet, and Danish Heart Foundation. Dr. Fuchs reports no relevant financial relationships. Disclosures for the other authors can be found here. Dr. Blankstein recently disclosed serving as a consultant to Amgen, Caristo Diagnostics, Novartis, and Silence Therapeutics. Disclosures for Dr. McDermott and Dr. Newby, who are SCOT-HEART 2 investigators, can be found here.
A version of this article originally appeared on Medscape.com.
About half of middle-aged adults in the community without cardiovascular (CV) symptoms have coronary atherosclerosis by CT angiography (CTA) that puts them at substantial risk for myocardial infarction (MI), suggests a prospective cohort study.
The 10% of participants who had subclinical disease considered obstructive at CTA showed a ninefold increased risk for MI over several years. Obstructive disease seemed to elevate risk more than subclinical disease that wasn’t obstructive but still considered extensive within the coronary arteries.
The findings, based on a Copenhagen General Population Study cohort, are new for CTA but consistent with research based on coronary artery calcium (CAC) scores and other ways to assess CV risk, say researchers.
Although all participants underwent CTA, such imaging isn’t used in the general population for atherosclerosis screening. But the findings may have implications for “opportunistic screening” for subclinical coronary disease at CTA conducted for other reasons, notes the study’s report, published online in the Annals of Internal Medicine.
“Identification of luminal obstructive or extensive subclinical coronary atherosclerosis” could potentially provide “clinically relevant, incremental risk assessment” in nonischemic patients who undergo cardiac CT or electrocardiogram-gated chest CT before procedures such as arrhythmia ablation or valve repair, it states.
Such patients found with subclinical coronary atherosclerosis might potentially “benefit from referral to intensified cardiovascular primary prevention therapy,” write the authors, led by Andreas Fuchs, MD, PhD, Copenhagen University Hospital-Rigshospitalet.
The group acknowledges the findings may not entirely apply to a non-Danish population.
A screening role for CTA?
Whether CTA has a role to play in adults without symptoms “is a big, open question in the field right now,” Ron Blankstein, MD, who was not involved in the current analysis, told this news organization.
Most population studies of CV risk prediction, such as MESA, have looked at CAC scores, not CTA, and have shown that “the more plaque individuals have, the higher the risk.” The current findings are similar but novel in coming from coronary CTA in a large asymptomatic community population, said Dr. Blankstein, who is director of cardiac CT at Brigham and Women’s Hospital, Boston.
“It’s possible that patients who have obstructive plaque in general tend to have a larger amount of plaque as well,” he said. So, while the study suggests that “the more plaque individuals have, the worse their overall risk,” it also shows that the risk “is enhanced even more if they have obstructive disease.”
The Danish cohort analysis “provides a unique opportunity to study the contemporary natural history of coronary artery disease in the absence of intervention,” notes an accompanying editorial.
For example, both patients and clinicians were blinded to CTA results, and CV preventive therapies weren’t common, observe Michael McDermott, MBChB, and David E. Newby, DM, PhD, of the BHF Centre for Cardiovascular Science, University of Edinburgh.
The analysis suggests that subclinical coronary disease that is obstructive predicts MI risk more strongly than extensive coronary disease, they note, and may be present in two-thirds of MI patients. “This contrasts with symptomatic populations, where nonobstructive disease accounts for most future myocardial infarctions, presumably from plaque rupture.”
It also points to “strong associations between nonobstructive extensive disease and adverse plaque characteristics,” write Dr. McDermott and Dr. Newby. “This underscores the major importance of plaque burden” for the prediction of coronary events.
Graded risk
The analysis included 9,533 persons aged 40 years or older with available CTA assessments and without known ischemic heart disease or symptoms.
Obstructive disease, defined as presence of a luminal stenosis of at least 50%, was seen in 10% and nonobstructive disease in 36% of the total cohort, the report states.
Disease involving more than one-third of the coronary tree was considered extensive, and disease involving less than one-third nonextensive; these were seen in 10.5% and 35.8% of the cohort, respectively.
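The study’s two categorization axes reduce to simple thresholds. The sketch below is illustrative only; the function name and input format are assumptions, not from the study:

```python
def classify_cta_findings(max_stenosis_pct, tree_fraction_with_plaque):
    """Apply the study's two threshold rules to one participant's CTA result.

    max_stenosis_pct: worst luminal stenosis observed, as a percentage (0-100).
    tree_fraction_with_plaque: share of the coronary tree with plaque (0-1).
    """
    return {
        "obstructive": max_stenosis_pct >= 50,         # luminal stenosis of at least 50%
        "extensive": tree_fraction_with_plaque > 1/3,  # more than one-third of the tree
    }

print(classify_cta_findings(60, 0.20))  # obstructive but nonextensive
print(classify_cta_findings(30, 0.50))  # extensive but nonobstructive
```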
There were 71 MIs and 193 deaths over a median of 3.5 years. The adjusted relative risk for MI, compared with those without coronary atherosclerosis, was:
- 7.65 (95% confidence interval, 3.53-16.57) overall in patients with extensive disease.
- 8.28 (95% CI, 3.75-18.32) in those with obstructive but nonextensive disease.
- 9.19 (95% CI, 4.49-18.82) overall in those with obstructive disease.
- 12.48 (95% CI, 5.50-28.12) in those with both obstructive and extensive disease.
The adjusted RR for the composite of death or MI was also elevated in persons with extensive disease:
- 2.70 (95% CI, 1.72-4.25) in those with extensive but nonobstructive disease.
- 3.15 (95% CI, 2.05-4.83) in those with extensive and obstructive disease.
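The figures above are adjusted estimates from the study’s models, but the arithmetic behind an unadjusted relative risk is simple. The counts below are hypothetical, chosen only to illustrate the calculation:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Unadjusted relative risk: the event rate in the exposed group
    divided by the event rate in the unexposed (reference) group."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Hypothetical counts, not the study's data: 20 MIs among 1,000 persons with
# obstructive disease vs. 10 MIs among 4,500 persons with no atherosclerosis.
rr = relative_risk(20, 1000, 10, 4500)
print(round(rr, 1))  # 9.0, i.e., a roughly ninefold higher risk
```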
“It’s one thing to show that the more plaque, the higher the risk,” Dr. Blankstein said. But “does the information ultimately lead to better outcomes? Do patients have fewer MIs or fewer deaths?” Several ongoing randomized trials are exploring these questions.
They include DANE-HEART (Computed Tomography Coronary Angiography for Primary Prevention), projected to enroll about 6,000 participants from the Copenhagen General Population Study cohort who have at least one CV risk factor, and SCOT-HEART 2 (second Computed Tomography Coronary Angiography for the Prevention of Myocardial Infarction), enrolling a similar cohort in Scotland.
The study was supported by grants from AP Møller og Hustru Chastine Mc-Kinney Møllers Fond, the Research Council of Rigshospitalet, and Danish Heart Foundation. Dr. Fuchs reports no relevant financial relationships. Disclosures for the other authors can be found here. Dr. Blankstein recently disclosed serving as a consultant to Amgen, Caristo Diagnostics, Novartis, and Silence Therapeutics. Disclosures for Dr. McDermott and Dr. Newby, who are SCOT-HEART 2 investigators, can be found here.
A version of this article originally appeared on Medscape.com.
The physician as leader
Physicians are placed in positions of leadership by the medical team, by the community, and by society, particularly during times of crisis such as the COVID pandemic. They are looked to by the media at times of health care news such as the overturning of Roe v. Wade.1 In a 2015 survey of resident physicians, two-thirds agreed that a formalized leadership curriculum would help them become better supervisors and clinicians.2 While all physicians are viewed as leaders, the concept of leadership is rarely, if ever, described or developed as a part of medical training. This month’s column will provide insights into defining leadership as a physician in the medical and administrative settings.
Benefits of effective leadership
Physicians, whether they are clinicians, researchers, administrators, or teachers, are expected to oversee and engage their teams. A report by the Institute of Medicine recommended that academic health centers “develop leaders at all levels who can manage the organizational and system changes necessary to improve health through innovation in health professions education, patient care, and research.”3 Hospitals with higher-rated management practices and more highly rated boards of directors have been shown to deliver higher-quality care and better clinical outcomes, including lower mortality.
To illustrate, the clinicians at the Mayo Clinic annually rate their supervisors on a Leader Index, a simple 12-question survey of five leadership domains: truthfulness, transparency, character, capability, and partnership. All supervisors were physicians and scientists. Their findings revealed that for each one-point increase in composite leadership score, there was a 3.3% decrease in the likelihood of burnout and a 9.0% increase in the likelihood of satisfaction in the physicians supervised.4
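Read as a compounding per-point association (a simplification for illustration; the study reports associations, not this exact model), those per-point figures imply the following for a supervisor scoring 3 points higher on the composite index:

```python
def compounded_change(points, per_point_rate):
    """Compound a per-point relative change over a score difference.
    Illustrative reading of the reported association only."""
    return (1 + per_point_rate) ** points - 1

burnout = compounded_change(3, -0.033)      # 3.3% decrease per point
satisfaction = compounded_change(3, 0.090)  # 9.0% increase per point
print(f"burnout {burnout:+.1%}, satisfaction {satisfaction:+.1%}")
# burnout -9.6%, satisfaction +29.5%
```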
Interprofessional teamwork and engagement are vital skills for a leader to create a successful team. Enhanced management practices have also been associated with higher patient approval ratings and better financial performance. Effective leadership additionally affects physician well-being, with stronger leadership associated with less physician burnout and higher satisfaction.5
Leadership styles enhance quality measures in health care.6 The most effective leadership styles are ones in which the staff feels they are part of a team, are engaged, and are mentored.7 While leadership styles can vary, the common theme is staff engagement. An authoritative style leader is one who mobilizes the team toward a vision, that is, “Come with me.” An affiliative style leader creates harmony and builds emotional bonds where “people come first.” Democratic leaders forge a consensus through staff participation by asking, “What do you think?” Finally, a leader who uses a coaching style helps staff to identify their strengths and weaknesses and work toward improvement. These leadership behaviors are in contradistinction to the unsuccessful coercive leader who demands immediate compliance, that is, “Do what I tell you.”
Five fundamental leadership principles are shown in Table 1.8
Effective leaders have an open (growth) mindset, unwavering attention to diversity, equity, and inclusion, and to building relationships and trust; they practice effective communication and listening, focus on results, and cocreate support structures.
A growth mindset is the belief that one’s abilities are not innate but can improve through effort and learning.9
Emotional intelligence
A survey of senior business managers rated the qualities found in the most outstanding leaders. Using objective criteria such as profitability, the study psychologists interviewed the highest-rated leaders to compare their capabilities. While intellect and cognitive skills were important, the results showed that emotional intelligence (EI) was twice as important as technical skills and IQ.10 As an example, in a 1996 study, when senior managers had an optimal level of EI, their division’s yearly earnings were 20% higher than estimated.11
EI is a leadership competency that deals with the ability to understand and manage your own emotions and your interactions with others.10 At the Cleveland Clinic, EI is exemplified by the acronym HEART, whereby the team strives to improve the patient experience, particularly when an error occurs. The health care team uses EI by showing the ability to Hear, Empathize, Apologize, Reply, and Thank. When an untoward event occurs, the physician, as the leader of the team, must lead by example when communicating with staff and patients. EI consists of five components (Table 2).13
- Self-awareness is insight into your own emotions and behavior, the basis for improvement. Maintaining a journal of your daily thoughts may assist with this, as can simply pausing to pay attention during times of heightened emotions.
- Self-regulation shows control, that is, behaving according to your values, and being accountable and calm when challenged.
- Purpose, knowing your “why,” produces motivation and helps maintain optimism.
- Empathy shows the ability to understand the emotions of other people.
- Social skill is the ability to establish mutually rewarding relationships.
Given all the above benefits, it is no surprise that companies are actively trying to use artificial intelligence to improve EI.12
Learning to be a leader
In medical school, students are expected to develop skills to handle and resolve conflicts, learn to share leadership, take mutual responsibility, and monitor their own performance.13 Although training young physicians in leadership is not unprecedented, a systematic review revealed a lack of analytic studies evaluating the effectiveness of the teaching methods.14 During undergraduate medical education, standard curricula and methods of instruction on leadership are not established, resulting in variable outcomes.
The Association of American Medical Colleges offers a curriculum, “Preparing Medical Students to Be Physician Leaders: A Leadership Training Program for Students Designed and Led by Students.”15 The objectives of this training are to help students identify their “personal style of leadership, recognize strengths and weaknesses, utilize effective communication strategies, appropriately delegate team member responsibilities, and provide constructive feedback to help improve team function.”
Take-home points
Following the completion of formal medical education, physicians are thrust into leadership roles. The key to being an effective leader is using EI to mentor the team and make staff feel connected to the team’s meaning and purpose, so they feel valued.
Dr. Trolice is director of The IVF Center in Winter Park, Fla., and professor of obstetrics and gynecology at the University of Central Florida, Orlando.
References
1. Carsen S and Xia C. McGill J Med. 2006 Jan;9(1):1-2.
2. Jardine D et al. J Grad Med Educ. 2015;7(2):307-9.
3. Institute of Medicine. Acad Emerg Med. July 2004;11(7):802-6.
4. Shanafelt TD et al. Mayo Clin Proc. April 2015;90(4):432-40.
5. Rotenstein LS et al. Harv Bus Rev. Oct. 17, 2018.
6. Sfantou SF. Healthcare. 2017;5(4):73.
7. Goleman D. Harv Bus Rev. March-April 2000.
8. Collins-Nakai R. McGill J Med [Internet]. 2020 Dec. 1 [cited 2023 Mar. 28];9(1).
9. Dweck C. Harv Bus Rev. Jan. 13, 2016.
10. Goleman D. Harv Bus Rev. 1998 Nov-Dec;76(6):93-102.
11. Goleman D et al. Primal Leadership: Realizing the Power of Emotional Intelligence. Boston: Harvard Business School Publishing, 2002.
12. Limon D and Plaster B. Harv Bus Rev. Jan. 25, 2022.
13. Chen T-Y. Tzu Chi Med J. Apr–Jun 2018;30(2):66-70.
14. Kumar B et al. BMC Med Educ. 2020;20:175.
15. Richards K et al. MedEdPORTAL. Dec. 13, 2019.
Looking at CGRP-Related Medications for Migraine, April 2023
Since 2018, the field of headache medicine has changed significantly. The development of calcitonin gene-related peptide (CGRP)-targeting preventive medications has led to the ability to treat migraine in a much more specific manner. The development of CGRP acute oral medications over the past 2 years has allowed people with migraine the ability to use well-tolerated, migraine-specific, abortive treatments. Triptan medications were the first migraine-specific acute treatments developed, some of which were nonoral, such as injectable sumatriptan and intranasal sumatriptan and zolmitriptan. The study by Lipton and colleagues assesses the safety and tolerability of a novel acute CGRP antagonist nonoral treatment, zavegepant.
In this double-blind, randomized, multicenter trial, nearly 2,000 participants with a diagnosis of episodic migraine with or without aura were enrolled; they were excluded if they had previously used another CGRP antagonist, either injectable or oral, before enrolling in this study. In addition to assessing migraine pain, participants were asked to identify their otherwise most bothersome symptom, specifically photophobia, phonophobia, or nausea. They were given a nasal spray to self-administer and were assessed 15 minutes after treatment and at multiple additional intervals, up to 48 hours after the initial dosing. The primary endpoints were freedom from pain and freedom from the most bothersome symptom at 2 hours after treatment onset. There were 17 secondary endpoints.
At 2 hours after treatment onset, a statistically significant proportion of participants had achieved freedom from pain, although the percentage remained somewhat low at 24%. Freedom from the most bothersome symptom was also statistically significant, at 40%. The results for 13 of the 17 secondary endpoints were also statistically significant, including pain relief at 2 hours; sustained pain relief at 2-24 hours and 48 hours; functional improvement; and freedom from photophobia and phonophobia. The most common adverse effects were poor taste, nasal discomfort, and throat irritation. No serious adverse events were noted.
Zavegepant has been FDA approved for the acute treatment of migraine on the basis of these data. This is a novel, well-tolerated, nonoral acute treatment for migraine. We can now treat patients with very severe nausea or more sudden-onset pain with a CGRP option that can potentially treat their attacks more quickly.
One early finding in many of the CGRP studies was that a certain subpopulation of migraine patients have a robust and rapid preventive response to monoclonal antibody treatment. Raffaelli and colleagues sought to evaluate potential characteristics that would better predict the efficacy of CGRP antagonist monoclonal antibodies for the prevention of migraine.
In this study, the definition of a superresponse to CGRP antagonist treatment was a >75% reduction in monthly headache days after 3 months of treatment. Nonresponse was defined as <25% reduction over this same period. This was a retrospective cohort study at one headache center in Berlin, Germany. A total of 260 patients were enrolled, all with a diagnosis of migraine and all given a preventive CGRP monoclonal antibody.
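Those cutoffs amount to a simple classification rule. A minimal sketch (the function name and inputs are illustrative, not from the study):

```python
def classify_cgrp_response(baseline_headache_days, month3_headache_days):
    """Classify response by percent reduction in monthly headache days after
    3 months of treatment, using the study's cutoffs (>75% reduction =
    superresponse, <25% = nonresponse). Assumes baseline_headache_days > 0."""
    reduction = (baseline_headache_days - month3_headache_days) / baseline_headache_days
    if reduction > 0.75:
        return "superresponder"
    if reduction < 0.25:
        return "nonresponder"
    return "intermediate responder"

print(classify_cgrp_response(20, 4))   # 80% reduction -> superresponder
print(classify_cgrp_response(20, 18))  # 10% reduction -> nonresponder
```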
There was no significant difference between nonresponders and superresponders when compared for sex, age, or time since migraine diagnosis. Erenumab was the most commonly prescribed CGRP antagonist medication, but all CGRP antagonists were included. There was no significant difference when receptor-targeting and ligand-targeting antibodies were compared. Nonresponders were more likely to have chronic migraine and higher monthly headache day and monthly migraine day frequencies. Superresponders tended to have more “typical” migraine characteristics, such as unilateral or localized migraines or migraines with pulsating/throbbing characteristics, as well as the presence of photophobia and nausea; however, this was not statistically significant. Of note, superresponders were also significantly more likely than nonresponders to report improvement of their acute migraine attacks with triptan medications.
Patients with less frequent migraine attacks and more classic migraine attacks appear to be much more likely to respond quickly and effectively to many preventive options; this appears to be most robust with the CGRP antibody class. Although the reason for this robust response is not entirely clear, it would certainly be best for providers to consider the initiation of CGRP antagonist preventive treatment in patients with these characteristics.
The newest generation of migraine-specific medications targets either the inflammatory neuropeptide CGRP or the CGRP receptor. Erenumab is a CGRP receptor blocker, whereas both fremanezumab and galcanezumab block the CGRP ligand. Erenumab has been associated with constipation and high blood pressure, whereas the other CGRP antagonist medications are not associated with these side effects. Whether this is due to the difference in mechanism of action, specifically whether the antibody blocks the CGRP receptor or binds the CGRP ligand itself, is under consideration. Schiano di Cola and colleagues specifically sought to investigate the subtle differences between these two subclasses of treatment.
Patients with high-frequency episodic and chronic migraine were enrolled in this retrospective study, which included 6 months of data. The researchers looked specifically at efficacy after 1, 3, and 6 months of treatment. The primary outcomes were monthly headache and migraine days and migraine disability, as measured by the Migraine Disability Assessment Scale (MIDAS) and Headache Impact Test (HIT-6) scores. Concomitant analgesic medication consumption and response rate relative to baseline were also compared.
A total of 152 patients were enrolled, 68 receiving CGRP ligand-targeting therapy and 84 receiving CGRP receptor-blocking therapy. Medication overuse was present in 73% of patients. Although significant improvement from baseline was noted in monthly headache days, monthly migraine days, severity, analgesic consumption, and disability, MIDAS scores were significantly lower in the CGRP ligand-blocking group than in the CGRP receptor-blocking group at 1 and 3 months. Monthly migraine days were also lower in the ligand-blocking group, but only after 3 months. The other variables, including monthly headache days, analgesic consumption, severity, and disability, did not differ significantly between groups.
Adverse events were not compared between the two groups, even though this is a previously noted difference between these two classes of medications. Although there are some slight differences in efficacy, the majority of outcome metrics did not differ significantly between the groups. One would be hard-pressed to choose a specific CGRP medication on the basis of these data.
Since 2018, the field of headache medicine has changed significantly. The development of calcitonin gene-related peptide (CGRP)-targeting preventive medications has made it possible to treat migraine in a much more specific manner, and the development of oral acute CGRP medications over the past 2 years has given people with migraine access to well-tolerated, migraine-specific abortive treatments. Triptan medications were the first migraine-specific acute treatments developed, some of which were nonoral, such as injectable sumatriptan and intranasal sumatriptan and zolmitriptan. The study by Lipton and colleagues assesses the safety and tolerability of a novel nonoral acute CGRP antagonist treatment, zavegepant.
In this double-blind, randomized, multicenter trial, nearly 2000 participants with a diagnosis of episodic migraine with or without aura were enrolled; participants were excluded if they had previously used another CGRP antagonist, either injectable or oral, before enrollment. In addition to rating migraine pain, participants were asked to identify their otherwise most bothersome symptom, specifically photophobia, phonophobia, or nausea. They were given a nasal spray to self-administer and were assessed 15 minutes after treatment and at multiple additional intervals, up to 48 hours after the initial dose. The primary endpoints were freedom from pain and freedom from the most bothersome symptom at 2 hours after treatment onset. There were 17 secondary endpoints.
At 2 hours after treatment onset, a statistically significantly greater proportion of participants had achieved freedom from pain, although the absolute rate remained somewhat low at 24%. Freedom from the most bothersome symptom was also statistically significant, reaching 40%. Results were likewise statistically significant for 13 of the 17 secondary endpoints, including pain relief at 2 hours, sustained pain relief from 2 to 24 hours and to 48 hours, functional improvement, and freedom from photophobia and phonophobia. The most common adverse effects were poor taste, nasal discomfort, and throat irritation. No serious adverse events were noted.
Zavegepant has been FDA approved for the acute treatment of migraine on the basis of these data. This is a novel, well-tolerated, nonoral acute treatment for migraine. We can now treat patients with very severe nausea or more sudden-onset pain with a CGRP option that can potentially treat their attacks more quickly.