U.S. primary care seen lagging in key markers
In delivery of primary care, including access and coordination, the U.S. trails well behind 10 other wealthy countries, according to a new report from the Commonwealth Fund.
The document, released March 15, concludes that the shortcomings in the U.S. system – from a lack of a relationship with a primary care physician to unequal access to after-hours care – “disproportionately affect Black and Latinx communities and rural areas, exacerbating disparities that have widened during the COVID-19 pandemic.”
“This report really shows that the U.S. is falling behind. We know that a strong primary care system yields better health outcomes. We have a lot to learn from other high-income countries,” coauthor Munira Z. Gunja, MPH, a senior researcher for the Commonwealth Fund’s International Program in Health Policy and Practice Innovations, told this news organization. “At baseline, we really need to make sure that everyone has health insurance in this country so they can actually use primary care services, and we need to increase the supply of those services.”
The report draws from the Commonwealth Fund’s 2019 and 2020 International Health Policy Surveys and the 2020 International Profiles of Health Care Systems. Among the main points:
- U.S. adults are the least likely to have a regular physician or place of care or a long-standing relationship with a primary care provider: 43% of American adults have a long-term relationship with a primary care doctor, compared with highs of 71% in Germany and the Netherlands.
- Access to home visits or after-hours care – excluding emergency department visits – is lowest in the United States (45%). In the Netherlands, Norway, New Zealand, and Germany, the rate is 90% to 96%.
- Half of primary care providers in the United States report adequate coordination with specialists and hospitals – around the average for the 11 countries studied.
‘Dismal mess’
Experts reacted to the report with a mix of concern and frustration – but not surprise.
“The results in this report are not surprising, and we have known them all for a number of years now,” Timothy Hoff, PhD, a health policy expert at Northeastern University, Boston, said. “Primary care doctors remain the backbone of our primary care system. But there are too few of them in the United States, and there likely will remain too few of them in the future. This opens the door to other and more diverse forms of innovation that will be required to help complement the work they do.”
Dr. Hoff, author of Searching for the Family Doctor: Primary Care on the Brink, added that comparing the United States to smaller countries like Norway or the United Kingdom is “somewhat problematic.”
“Our system has to take care of several hundred million people, trapped in a fragmented and market-based delivery system focused on specialty care, each of whom may have a different insurance plan,” he said. “Doing some of the things very small countries with government-funded insurance and a history of strong primary care delivery do in taking care of far fewer citizens is not realistic.”
Jeffrey Borkan, MD, PhD, chair and professor in the department of family medicine at the Alpert Medical School of Brown University, Providence, R.I., said the most shocking finding in the report is that despite spending far more on health care than any other country, “we cannot manage to provide one of the least expensive and most efficacious services: a relationship with a primary care doctor.”
Arthur Caplan, PhD, director of the Division of Medical Ethics at New York University Langone Medical Center, called primary care in this country “a dismal mess. It has been for many years. This is especially so in mental health. Access in many counties is nonexistent, and many primary care physicians are opting into boutique care.”
R. Shawn Martin, CEO of the 133,000-member American Academy of Family Physicians, said, “None of this surprises me. I think these are trendlines; we have been following this for many, many years here at the Academy.”
Mr. Martin added that he was disappointed that the recent, large investments in sharing and digitizing information have not closed the gaps that hinder the efficient and widespread delivery of primary care.
The findings in the report weren’t all bad. More primary care providers in the United States (30%) screen their patients for social needs such as housing, food security, and transportation – the highest among all 11 nations studied.
Also, the Commonwealth Fund said the proportion of patients who reported receiving information on meeting their social needs and screening for domestic violence or social isolation was low everywhere. The percentage was highest in the United States, Canada, and Norway, at 9%; Sweden had the lowest rate for such screenings, at 1%.
The researchers noted that social determinants of health account for as much as 55% of health outcomes. “In some countries, like the United States, the higher rates of receiving such information may be a response to the higher rates of material hardship, along with a weaker safety net,” the report states.
Ms. Gunja and her colleagues suggested several options for changes in policies, including narrowing the wage gap between primary care providers and higher-paid specialists; subsidizing medical school tuition to give students incentives to enter primary care; investing in telehealth to make primary care more accessible; and rewarding and holding providers accountable for continuity of care.
“The U.S. had the largest wage gap and highest tuition fees among the countries we studied,” Ms. Gunja told this news organization.
Researchers noted that U.S. patients could benefit from the introduction of incentives such as those paid in New Zealand to primary health organizations, which receive additional funding per capita to promote health and coordinate care.
But Dr. Caplan was skeptical that those measures would do much to correct the problems.
“We have no will to fix this ongoing, scandalous situation,” he said. “Specialist care still pays inordinately large salaries. Nurses and physician extenders are underused. Academic prestige does little to reward primary care. Plus, patients are not pressing for better access. Sorry, but I see no solutions pending in the current climate. Obamacare barely survived.”
The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Empagliflozin scores topline win in EMPA-KIDNEY trial
Researchers running the EMPA-KIDNEY trial that’s been testing the safety and efficacy of the SGLT2 inhibitor empagliflozin (Jardiance) in about 6,600 patients with chronic kidney disease (CKD) announced on March 16 that they had stopped the trial early because of positive efficacy that met the study’s prespecified threshold for early termination.
EMPA-KIDNEY is the third major trial of an agent from the sodium-glucose cotransporter 2 (SGLT2) inhibitor class tested in patients with CKD to be stopped early because of positive results that met a prespecified termination rule.
In 2020, the DAPA-CKD trial of dapagliflozin (Farxiga) stopped early, after a median follow-up of 2.4 years, because of positive efficacy results. In 2019, the same thing happened in the CREDENCE trial of canagliflozin (Invokana), with the unexpected halt coming after a median follow-up of 2.62 years.
The announcement about EMPA-KIDNEY did not include information on median follow-up, but enrollment into the trial ran from May 2019 to April 2021, which means that the longest that enrolled patients could have been in the study was about 2.85 years.
The primary efficacy endpoint in EMPA-KIDNEY was a composite of a sustained decline in estimated glomerular filtration rate (eGFR) to less than 10 mL/min/1.73 m2, renal death, a sustained decline of at least 40% in eGFR from baseline, or cardiovascular death. The announcement of the trial’s early termination provided no details on the efficacy results.
EMPA-KIDNEY enrolled a wider range of patients
EMPA-KIDNEY expands the scope of types of patients with CKD now shown to benefit from treatment with an SGLT2 inhibitor. CREDENCE tested canagliflozin only in patients with type 2 diabetes and diabetic nephropathy, and in DAPA-CKD, two-thirds of enrolled patients had type 2 diabetes, and all had CKD. In EMPA-KIDNEY, 46% of the 6,609 enrolled patients had diabetes (including a very small number with type 1 diabetes).
Another departure from prior studies of an SGLT2 inhibitor in patients selected primarily for CKD was that in EMPA-KIDNEY, 20% of patients did not have albuminuria, and 34% had an eGFR at entry of less than 30 mL/min/1.73 m2; all enrolled patients were required to have an entry eGFR of at least 20 mL/min/1.73 m2, and the average eGFR was about 38 mL/min/1.73 m2. Albuminuria was not required for enrollment, except in patients whose eGFR was 45 mL/min/1.73 m2 or greater.
In DAPA-CKD, the minimum eGFR at entry had to be greater than or equal to 25 mL/min/1.73 m2, and roughly 14% of enrolled patients had an eGFR of less than 30 mL/min/1.73 m2. The average eGFR in DAPA-CKD was about 43 mL/min/1.73 m2. In addition, all patients had at least microalbuminuria, with a minimum urinary albumin-to-creatinine ratio of 200. In CREDENCE, the minimum eGFR for enrollment was 30 mL/min/1.73 m2, and the average eGFR was about 56 mL/min/1.73 m2. All patients in CREDENCE had to have macroalbuminuria, with a urinary albumin-to-creatinine ratio of more than 300.
According to the researchers who designed EMPA-KIDNEY, the trial enrollment criteria aimed to include adults with CKD “who are frequently seen in practice but were under-represented in previous SGLT2 inhibitor trials.”
Indications for empagliflozin are expanding
The success of empagliflozin in EMPA-KIDNEY follows its positive results in both the EMPEROR-Reduced and EMPEROR-Preserved trials, which collectively proved the efficacy of the agent for patients with heart failure regardless of their left ventricular ejection fraction and regardless of whether they also had diabetes.
These results led the U.S. Food and Drug Administration to recently expand the labeled indication for empagliflozin to all patients with heart failure. Empagliflozin also has labeled indications for glycemic control in patients with type 2 diabetes and to reduce the risk of cardiovascular death in adults with type 2 diabetes and established cardiovascular disease.
As of today, empagliflozin has no labeled indication for treating patients with CKD. Dapagliflozin received that indication in April 2021, and canagliflozin received an indication for treating patients with type 2 diabetes, diabetic nephropathy, and albuminuria in September 2019.
EMPA-KIDNEY is sponsored by Boehringer Ingelheim and Lilly, the two companies that jointly market empagliflozin (Jardiance).
A version of this article first appeared on Medscape.com.
Cancer patients vulnerable to COVID misinformation
For the past 2 years, oncology practitioners around the world have struggled with the same dilemma: how to maintain their patients’ cancer care without exposing them to COVID-19. Regardless of the country, language, or even which wave of the pandemic, the conversations have likely been very similar: weighing risks versus benefits, and individualizing each patient’s pandemic cancer plan.
But one question most oncologists have probably overlooked in these discussions is about where their patients get their COVID information – or misinformation.
Surprisingly, this seemingly small detail could make a big difference in a patient’s prognosis.
A recent study found that adult patients with cancer are particularly vulnerable to COVID misinformation, building on an earlier finding of similar vulnerabilities among parents of children with cancer, compared with parents of healthy children.
“It doesn’t matter what you search for, there is an overwhelming level of information online,” the lead author on both studies, Jeanine Guidry, PhD, from Virginia Commonwealth University’s Massey Cancer Center in Richmond, said in an interview. “If misinformation is the first thing you encounter about a topic, you’re much more likely to believe it and it’s going to be very hard to convince you otherwise.”
Before the pandemic, Dr. Guidry, who is director of the Media+Health Lab at VCU, had already been studying vaccine misinformation on Pinterest and Instagram.
So when data coming out at the start of the pandemic suggested that an increase in pediatric cancer mortality might be partially because of COVID-19 misinformation, she jumped on it.
Dr. Guidry and associates designed a questionnaire involving COVID misinformation statements available online and found that parents of children with cancer were significantly more likely to endorse them, compared with parents of healthy children.
“Our advice to clinicians is you may have an issue here,” Dr. Guidry said in an interview. “You may want to check where they get their news, and if there’s any pieces of misinformation that could be harmful.”
Some beliefs, such as eating more garlic protects against COVID, are not particularly harmful, she acknowledged, but others – such as drinking bleach being protective – are quite harmful, and they often stem from the same misinformation sources.
Both of Dr. Guidry’s studies involved surveys of either adult patients with cancer or parents of children with cancer.
The adult patient survey was conducted June 1-15, 2020, and included 897 respondents, of whom 287 were patients in active treatment for cancer, 301 were survivors not currently in treatment, and 309 had no cancer history.
The parents’ survey, conducted in May 2020, included 735 parents of children aged 2-17 years, 315 of whom had children currently undergoing cancer treatment, and 420 of whom had children with no history of cancer.
Respondents were asked to agree or disagree with misinformation statements such as “it is unsafe to receive mail from China,” “antibiotics can prevent and treat COVID-19,” and “COVID is less deadly than the ‘flu.’ ”
The surveys revealed that patients in current treatment for cancer and parents of children in current treatment were most likely to endorse COVID misinformation. Results from the parents’ survey showed that “believing misinformation was also more likely for fathers, younger parents, and parents with higher perceived stress from COVID-19,” the authors wrote. Among the adult respondents, patients in active treatment were most likely to believe misinformation, cancer survivors no longer in treatment were least likely, and healthy controls fell in between.
Why the difference? The authors suggested that patients in active treatment “may seek out more information on the internet or via social media where they are more exposed to misinformation,” whereas survivors no longer undergoing treatment may be more “media savvy and have learned to be wary of questionable health information.”
In their articles, Dr. Guidry and associates advised oncologists to be aware of their patients’ potential to endorse COVID misinformation and to “proactively address this in routine visits as well as tailored written materials.” This is easier said than done, she commented, acknowledging that keeping up with the latest misinformation is a challenge.
The misinformation statements her group used in their surveys were popular early in the pandemic, but “some of them have shown fairly remarkable staying power and some have been replaced,” she said. She invited interested clinicians to contact her team for guidance on newer misinformation.
Ultimately, she believes most patients with cancer who endorse misinformation are simply afraid, and looking for help. “They’re already dealing with a level of stress from their illness and then they’re thrown into a pandemic,” Dr. Guidry said. “At some point you just want a solution. Hydroxychloroquine? Great! Horse dewormer? Fantastic! Just wanting to control the situation and not having something else to deal with.”
Both studies were funded by the National Cancer Institute at the National Institutes of Health. The authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM PATIENT EDUCATION AND COUNSELING
Doctors treat osteoporosis with hormone therapy against guidelines
Hormone therapy (HT) for osteoporosis can be given as estrogen alone or as a combination of hormones that includes estrogen. The physicians interviewed for this piece who prescribe HT for osteoporosis suggest that, for some of their patients, the benefits outweigh the downsides. But such doctors may be a minority, suggests Michael R. McClung, MD, founding director of the Oregon Osteoporosis Center, Portland.
According to Dr. McClung, HT is now rarely prescribed as treatment – as opposed to prevention – for osteoporosis in the absence of additional benefits such as reducing vasomotor symptoms.
Researchers’ findings on HT use in women with osteoporosis are complex. While HT is approved for menopausal prevention of osteoporosis, it is not indicated as a treatment for the disease by the Food and Drug Administration. See the prescribing information for Premarin tablets, which contain a mixture of estrogen hormones, for an example of the FDA’s indications and usage for the type of HT addressed in this article.
Women’s Health Initiative findings
The Women’s Health Initiative (WHI) hormone therapy trials showed that HT reduces the incidence of all osteoporosis-related fractures in postmenopausal women, even those at low risk of fracture, although osteoporosis-related fracture was not a study endpoint. These trials also revealed that HT was associated with increased risks of cardiovascular and cerebrovascular events, an increased risk of breast cancer, and other adverse health outcomes.
The release of the interim results of the WHI trials in 2002 led to a great deal of fear and confusion about the use of HT after menopause. After the WHI findings were published, estrogen use dropped dramatically across all indications, including for vasomotor symptoms and for the prevention and treatment of osteoporosis.
Prior to the WHI study, it was very common for hormone therapy to be prescribed as women neared or entered menopause, said Risa Kagan, MD, clinical professor of obstetrics, gynecology, and reproductive sciences, University of California, San Francisco.
“When a woman turned 50, that was one of the first things we did – was to put her on hormone therapy. All that changed with the WHI, but now we are coming full circle,” noted Dr. Kagan, who currently prescribes HT as first-line treatment for osteoporosis to some women.
Hormone therapy’s complex history
HT’s ability to reduce bone loss in postmenopausal women is well-documented in many papers, including one published March 8, 2018, in Osteoporosis International, by Dr. Kagan and colleagues. This reduced bone loss has been shown to significantly reduce fractures in patients with low bone mass and osteoporosis.
While a growing number of therapies are now available to treat osteoporosis, HT was traditionally viewed as a standard method of preventing fractures in this population. It was also widely used to relieve other symptoms associated with menopause, such as hot flashes, night sweats, and sleep disturbances, and multiple observational studies had suggested that its use reduced the incidence of cardiovascular disease (CVD) in symptomatic menopausal women who initiated HT in early menopause.
Even though the WHI studies were the largest randomized trials ever performed in postmenopausal women, they had notable limitations, according to Dr. Kagan.
“The women were older – the average age was 63 years,” she said. “And they only investigated one route and one dose of estrogen.”
Since then, many different formulations and routes of administration with more favorable safety profiles than what was used in the WHI have become available.
It’s both scientifically and clinically unsound to extrapolate the unfavorable risk-benefit profile of HT seen in the WHI trials to all women regardless of age, HT dosage or formulation, or the length of time they’re on it, she added.
Today’s use of HT in women with osteoporosis
Re-analyses and follow-up studies from the WHI trials, along with data from other studies, have suggested that the benefit-risk profiles of HT are affected by a variety of factors. These include the timing of use in relation to menopause and chronological age and the type of hormone regimen.
“Clinically, many advocate for [hormone therapy] use, especially in the newer younger postmenopausal women to prevent bone loss, but also in younger women who are diagnosed with osteoporosis and then as they get older transition to more bone specific agents,” noted Dr. Kagan.
“Some advocate preserving bone mass and preventing osteoporosis and even treating the younger newly postmenopausal women who have no contraindications with hormone therapy initially, and then gradually transitioning them to a bone specific agent as they get older and at risk for fracture.
“If a woman is already fractured and/or has very low bone density with no other obvious secondary metabolic reason, we also often advocate anabolic agents for 1-2 years then consider estrogen for maintenance – again, if [there is] no contraindication to using HT,” she added.
Thus, an individualized approach is recommended to determine a woman’s risk-benefit ratio of HT use based on the absolute risk of adverse effects, Dr. Kagan noted.
“Transdermal and low/ultra-low doses of HT have a favorable risk profile and are effective in preserving bone mineral density and bone quality in many women,” she said.
According to Dr. McClung, HT “is most often used for treatment in women in whom hormone therapy was begun for hot flashes and then, when osteoporosis was found later, was simply continued.
“Society guidelines are cautious about recommending hormone therapy for osteoporosis treatment since estrogen is not approved for treatment, despite the clear fracture protection benefit observed in the WHI study,” he said. “Since [women in the WHI trials] were not recruited as having osteoporosis, those results do not meet the FDA requirement for treatment approval, namely the reduction in fracture risk in patients with osteoporosis. However, knowing what we know about the salutary skeletal effects of estrogen, many of us do use them in our patients with osteoporosis – although not prescribed for that purpose.”
Additional scenarios when doctors may advise HT
“I often recommend – and I think colleagues do as well – that women with recent menopause and menopausal symptoms who also have low bone mineral density or even scores showing osteoporosis see their gynecologist to discuss HT for a few years, perhaps until age 60 if no contraindications, and if it is well tolerated,” said Ethel S. Siris, MD, professor of medicine at Columbia University Medical Center in New York.
“Once they stop it we can then give one of our other bone drugs, but it delays the need to start them since on adequate estrogen the bone density should remain stable while they take it,” added Dr. Siris, an endocrinologist and internist, and director of the Toni Stabile Osteoporosis Center in New York. “They may need a bisphosphonate or another bone drug to further protect them from bone loss and future fracture [after stopping HT].”
Victor L. Roberts, MD, founder of Endocrine Associates of Florida, Lake Mary, pointed out that women now have many options for treatment of osteoporosis.
“If a woman is in early menopause and is having other symptoms, then estrogen is warranted,” he said. “If she has osteoporosis, then it’s a bonus.”
“We have better agents that are bone specific,” for a patient who presents with osteoporosis and no other symptoms, he said.
“If a woman is intolerant of alendronate or other similar drugs, or chooses not to have an injectable, then estrogen or a SERM [selective estrogen receptor modulator] would be an option.”
Dr. Roberts added that HT would be more of a niche drug.
“It has a role and documented benefit and works,” he said. “There is good scientific data for the use of estrogen.”
Dr. Kagan is a consultant for Pfizer, Therapeutics MD, and Amgen, and serves on the Medical and Scientific Advisory Board of American Bone Health. The other experts interviewed for this piece reported no conflicts.
Hormone therapy (HT) can be given as estrogen alone or as a combination of hormones that includes estrogen. The physicians interviewed for this piece who prescribe HT for osteoporosis suggest that, for some of their patients, the benefits outweigh the downsides. But such doctors may be in the minority, suggests Michael R. McClung, MD, founding director of the Oregon Osteoporosis Center, Portland.
According to Dr. McClung, HT is now rarely prescribed as treatment – as opposed to prevention – for osteoporosis in the absence of additional benefits such as reducing vasomotor symptoms.
Researchers’ findings on HT use in women with osteoporosis are complex. While HT is approved for the prevention of postmenopausal osteoporosis, it is not indicated by the Food and Drug Administration as a treatment for the disease. See the prescribing information for Premarin tablets, which contain a mixture of estrogen hormones, for an example of the FDA’s indications and usage for the type of HT addressed in this article.
Women’s Health Initiative findings
The Women’s Health Initiative (WHI) hormone therapy trials showed that HT reduces the incidence of all osteoporosis-related fractures in postmenopausal women, even those at low risk of fracture, although osteoporosis-related fracture was not a primary study endpoint. These trials also revealed that HT was associated with increased risks of cardiovascular and cerebrovascular events, an increased risk of breast cancer, and other adverse health outcomes.
The release of the interim results of the WHI trials in 2002 caused considerable fear and confusion about the use of HT after menopause. After the WHI findings were published, estrogen use dropped dramatically across all indications, including vasomotor symptoms and the prevention and treatment of osteoporosis.
Prior to the WHI study, it was very common for hormone therapy to be prescribed as women neared or entered menopause, said Risa Kagan, MD, clinical professor of obstetrics, gynecology, and reproductive sciences, University of California, San Francisco.
“When a woman turned 50, that was one of the first things we did – was to put her on hormone therapy. All that changed with the WHI, but now we are coming full circle,” noted Dr. Kagan, who currently prescribes HT as first line treatment for osteoporosis to some women.
Hormone therapy’s complex history
HT’s ability to reduce bone loss in postmenopausal women is well-documented in many papers, including one published March 8, 2018, in Osteoporosis International, by Dr. Kagan and colleagues. This reduced bone loss has been shown to significantly reduce fractures in patients with low bone mass and osteoporosis.
While a growing number of therapies are now available to treat osteoporosis, HT was traditionally viewed as a standard method of preventing fractures in this population. It was also widely used to prevent other types of symptoms associated with the menopause, such as hot flashes, night sweats, and sleep disturbances, and multiple observational studies had demonstrated that its use appeared to reduce the incidence of cardiovascular disease (CVD) in symptomatic menopausal women who initiated HT in early menopause.
Even though the WHI studies were the largest randomized trials ever performed in postmenopausal women, they had notable limitations, according to Dr. Kagan.
“The women were older – the average age was 63 years,” she said. “And they only investigated one route and one dose of estrogen.”
Since then, many different formulations and routes of administration with more favorable safety profiles than what was used in the WHI have become available.
It’s both scientifically and clinically unsound to extrapolate the unfavorable risk-benefit profile of HT seen in the WHI trials to all women regardless of age, HT dosage or formulation, or the length of time they’re on it, she added.
Today’s use of HT in women with osteoporosis
Re-analyses and follow-up studies from the WHI trials, along with data from other studies, have suggested that the benefit-risk profiles of HT are affected by a variety of factors. These include the timing of use in relation to menopause and chronological age and the type of hormone regimen.
“Clinically, many advocate for [hormone therapy] use, especially in the newer younger postmenopausal women to prevent bone loss, but also in younger women who are diagnosed with osteoporosis and then as they get older transition to more bone specific agents,” noted Dr. Kagan.
“Some advocate preserving bone mass and preventing osteoporosis and even treating the younger newly postmenopausal women who have no contraindications with hormone therapy initially, and then gradually transitioning them to a bone specific agent as they get older and at risk for fracture.
“If a woman is already fractured and/or has very low bone density with no other obvious secondary metabolic reason, we also often advocate anabolic agents for 1-2 years then consider estrogen for maintenance – again, if [there is] no contraindication to using HT,” she added.
Thus, an individualized approach is recommended to determine a woman’s risk-benefit ratio of HT use based on the absolute risk of adverse effects, Dr. Kagan noted.
“Transdermal and low/ultra-low doses of HT, have a favorable risk profile, and are effective in preserving bone mineral density and bone quality in many women,” she said.
According to Dr. McClung, HT “is most often used for treatment in women in whom hormone therapy was begun for hot flashes and then, when osteoporosis was found later, was simply continued.
“Society guidelines are cautious about recommending hormone therapy for osteoporosis treatment since estrogen is not approved for treatment, despite the clear fracture protection benefit observed in the WHI study,” he said. “Since [women in the WHI trials] were not recruited as having osteoporosis, those results do not meet the FDA requirement for treatment approval, namely the reduction in fracture risk in patients with osteoporosis. However, knowing what we know about the salutary skeletal effects of estrogen, many of us do use them in our patients with osteoporosis – although not prescribed for that purpose.”
Additional scenarios when doctors may advise HT
“I often recommend – and I think colleagues do as well – that women with recent menopause and menopausal symptoms who also have low bone mineral density or even scores showing osteoporosis see their gynecologist to discuss HT for a few years, perhaps until age 60 if no contraindications, and if it is well tolerated,” said Ethel S. Siris, MD, professor of medicine at Columbia University Medical Center in New York.
“Once they stop it we can then give one of our other bone drugs, but it delays the need to start them since on adequate estrogen the bone density should remain stable while they take it,” added Dr. Siris, an endocrinologist and internist, and director of the Toni Stabile Osteoporosis Center in New York. “They may need a bisphosphonate or another bone drug to further protect them from bone loss and future fracture [after stopping HT].”
Victor L. Roberts, MD, founder of Endocrine Associates of Florida, Lake Mary, pointed out that women now have many options for treatment of osteoporosis.
“If a woman is in early menopause and is having other symptoms, then estrogen is warranted,” he said. “If she has osteoporosis, then it’s a bonus.”
“We have better agents that are bone specific,” for a patient who presents with osteoporosis and no other symptoms, he said.
“If a woman is intolerant of alendronate or other similar drugs, or chooses not to have an injectable, then estrogen or a SERM [selective estrogen receptor modulator] would be an option.”
Dr. Roberts added that HT would be more of a niche drug.
“It has a role and documented benefit and works,” he said. “There is good scientific data for the use of estrogen.”
Dr. Kagan is a consultant for Pfizer, Therapeutics MD, and Amgen, and serves on the Medical and Scientific Advisory Board of American Bone Health. The other experts interviewed for this piece reported no conflicts.
Infectious disease pop quiz: Clinical challenge #19 for the ObGyn
Should a postpartum patient with chronic hepatitis C infection be discouraged from breastfeeding her infant?
Hepatitis C is not a contraindication to breastfeeding. Although the virus has been identified in breast milk, the risk of transmission to the infant is exceedingly low.
- Duff P. Maternal and perinatal infections: bacterial. In: Landon MB, Galan HL, Jauniaux ERM, et al. Gabbe’s Obstetrics: Normal and Problem Pregnancies. 8th ed. Elsevier; 2021:1124-1146.
- Duff P. Maternal and fetal infections. In: Resnik R, Lockwood CJ, Moore TJ, et al. Creasy & Resnik’s Maternal-Fetal Medicine: Principles and Practice. 8th ed. Elsevier; 2019:862-919.
Doctors have failed them, say those with transgender regret
In a unique Zoom conference, individuals who have detransitioned shared their experiences and said the medical profession has failed them.
The forum was convened on what was dubbed #DetransitionAwarenessDay by Genspect, a parent-based organization that seeks to put the brakes on medical transitions for children and adolescents. The group has doubts about the gender-affirming care model supported by the World Professional Association for Transgender Health, the American Medical Association, the American Academy of Pediatrics, and other medical groups.
“Affirmative” medical care is defined as treatment with puberty blockers and cross-sex hormones for those with gender dysphoria to transition to the opposite sex and is often followed by gender reassignment surgery. However, there is growing concern among many doctors and other health care professionals as to whether this is, in fact, the best way to proceed for those under age 18 in particular, with several countries pulling back on medical treatment and instead emphasizing psychotherapy first.
The purpose of the second annual Genspect meeting was to shed light on the experiences of individuals who have detransitioned – those who identified as transgender and transitioned, but then decided to end their medical transition. People logged on from all over the United States, Canada, New Zealand, Australia, the United Kingdom, Germany, Spain, Chile, and Brazil, among other countries.
“This is a minority within a minority,” said Genspect advisor Stella O’Malley, adding that the first meeting in 2021 was held because “too many people were dismissing the stories of the detransitioners.” Ms. O’Malley is a psychotherapist, a clinical advisor to the Society for Evidence-Based Gender Medicine, and a founding member of the International Association of Therapists for Desisters and Detransitioners.
“It’s become blindingly obvious over the last year that ... ‘detrans’ is a huge part of the trans phenomenon,” said Ms. O’Malley, adding that detransitioners have been “undermined and dismissed.”
Laura Edwards-Leeper, PhD (@DrLauraEL), a prominent gender therapist who has recently expressed concern regarding adequate gatekeeping when treating youth with gender dysphoria, agreed.
She tweeted: “You simply can’t call yourself a legit gender provider if you don’t believe that detransitioners exist. As part of the informed consent process for transitioning, it is unethical to not discuss this possibility with young people.” Dr. Edwards-Leeper is professor emeritus at Pacific University in Hillsboro, Ore.
Speakers in the forum largely offered experiences, not data. They pointed out that there has been little to no study of detransition, but all testified that it was less rare than it has been portrayed by the transgender community.
Struggles with going back
“There are so many reasons why people detransition,” said Sinead Watson, aged 30, a Genspect advisor who transitioned from female to male, starting in 2015, and who decided to detransition in 2019. Citing a study by Lisa Littman, MD, MPH, published in 2021, Ms. Watson said the most common reasons for detransitioning were realizing that gender dysphoria was caused by other issues; internal homophobia; and the unbearable nature of transphobia.
Ms. Watson said the hardest part of detransitioning was admitting to herself that her transition had been a mistake. “It’s embarrassing and you feel ashamed and guilty,” she said, adding that it may mean losing friends who now regard you as a “bigot, while you’re also dealing with transition regret.”
“It’s a living hell, especially when none of your therapists or counselors will listen to you,” she said. “Detransitioning isn’t fun.”
Carol (@sourpatches2077) said she knew for a year that her transition had been a mistake.
“The biggest part was I couldn’t tell my family,” said Carol, who identifies as a lesbian. “I put them through so much. It seems ridiculous to go: ‘Oops, I made this huge [expletive] mistake,’ ” she said, describing the moment she did tell them as “devastating.”
Grace (@hormonehangover) said she remembers finally hitting a moment of “undeniability” some years after transitioning. “I accept it, I’ve ruined my life, this is wrong,” she remembers thinking. “It was devastating, but I couldn’t deny it anymore.”
Don’t trust therapists
People experiencing feelings of unease “need a therapist who will listen to them,” said Ms. Watson. When she first detransitioned, her therapists treated her badly. “They just didn’t want to speak about detransition,” she said, adding that “it was like a kick in the stomach.”
Ms. Watson said she’d like to see more training about detransition, but also on “preventative techniques,” adding that many people transition who should not. “I don’t want more detransitioners – I want less.
“In order for that to happen, we need to treat people with gender dysphoria properly,” said Ms. Watson, adding that the affirmative model is “disgusting, and that’s what needs to change.”
“I would tell somebody to not go to a therapist,” said Carol. Identifying as a butch lesbian, she felt like her therapists had pushed her into transitioning to male. “The No. 1 thing not understood by the mental health professionals is that the vast majority of homosexuals were gender-nonconforming children.” She added that this is especially true of butch lesbians.
Therapists – and doctors – also need to acknowledge both the trauma of transition and detransition, she said.
Kaiser, where she had transitioned, offered her breast reconstruction. Carol said it felt demeaning. “Like you’re Mr. Potatohead: ‘Here, we can just ... put on some new parts and you’re good to go.’ ”
“Doctors are concretizing transient obsessions,” said Helena Kerschner (@lacroicsz), quoting a chatroom user.
Ms. Kerschner gave a presentation on “fandom”: becoming obsessed with a movie, book, TV show, musician, or celebrity, spending every waking hour chatting online or writing fan fiction, or attempting to interact with the celebrity online. It’s a fantasy-dominated world and “the vast majority” of participants are teenage girls who are “identifying as trans,” in part, because they are fed a community-reinforced message that it’s better to be a boy.
Therapists and physicians who help them transition “are harming them for life based on something they would have grown out of or overcome without the permanent damage,” Ms. Kerschner added.
Doctors ‘gaslighting’ people into believing that transition is the answer
A pervasive theme during the webinar was that many people are being misdiagnosed with gender dysphoria, which may not be resolved by medical transition.
Allie, a 22-year-old who stopped taking testosterone after 1½ years, said she initially started the transition to male when she gave up trying to figure out why she could not identify with, or befriend, women, and after a childhood and adolescence spent mostly in the company of boys and being more interested in traditionally male activities.
She endured sexual abuse as a teenager and her parents divorced while she was in high school. Allie also had multiple suicide attempts and many incidents of self-harm. When she decided to transition, at age 18, she went to a private clinic and received cross-sex hormones within a few months of her first and only 30-minute consultation. “There was no explorative therapy,” she said, adding that she was never given a formal diagnosis of gender dysphoria.
For the first year, she said she was “over the freaking moon” because she felt like it was the answer. But things started to unravel while she attended university, and she attempted suicide at age 20. A social worker at the school identified her symptoms – which had been the same since childhood – as autism. She then decided to cease her transition.
Another detransitioner, Laura Becker, said it took 5 years after her transition to recognize that she had undiagnosed PTSD from emotional and psychiatric abuse. Despite a history of substance abuse, self-harm, suicidal ideation, and other mental health issues, she was given testosterone and had a double mastectomy at age 20. She became fixated on gay men, which devolved into a methamphetamine- and crack-fueled relationship with a man she met on the gay dating platform Grindr.
“No one around me knew any better or knew how to help, including the medical professionals who performed the mastectomy and who casually signed off and administered my medical transition,” she said.
Once she was aware of her PTSD she started to detransition, which itself was traumatic, said Laura.
Limpida, aged 24, said he felt pushed into transitioning after seeking help at a Planned Parenthood clinic. He identified as trans at age 15 and spent years attempting to be a woman socially, but every step made him feel more miserable, he said. When he went to the clinic at age 21 to get estrogen, he said he felt like the staff was dismissive of his mental health concerns – including that he was suicidal, abusing substances, and severely depressed. He was told he was the “perfect candidate” for transitioning.
A year later, he said he felt worse. The nurse suggested he seek out surgery. After Limpida researched what was involved, he decided to detransition. He has since received an autism diagnosis.
Robin, also aged 24, said the idea of surgery had helped push him into detransitioning, which began in 2020 after 4 years of estrogen. He said he had always been gender nonconforming and knew he was gay at an early age. He believes that gender-nonconforming people are “gaslighted” into thinking that transitioning is the answer.
Lack of evidence-based, informed consent
Michelle Alleva, who stopped identifying as transgender in 2020 but had ceased testosterone 4 years earlier because of side effects, cited what she called a lack of evidence base for the effectiveness and safety of medical transitions.
“You need to have a really, really good evidence base in place if you’re going straight to an invasive treatment that is going to cause permanent changes to your body,” she said.
Access to medical transition used to involve more “gatekeeping” through mental health evaluations and other interventions, she said, but there has been a shift from treating what was considered a psychiatric issue to essentially affirming an identity.
“This shift was activist driven, not evidence based,” she emphasized.
Most studies showing satisfaction with transition involve only a few years of follow-up, she said. She added that the longest follow-up study of transition, published in 2011 and spanning 30 years, showed that the suicide rate 10-15 years post surgery was 20 times higher than that of the general population.
Studies of regret were primarily conducted before the rapid increase in the number of trans-identifying individuals, she said, which makes it hard to draw conclusions about pediatric transition. Getting estimates on this population is difficult because so many who detransition do not tell their clinicians, and many studies have short follow-up times or a high loss to follow-up.
Ms. Alleva also took issue with the notion that physicians were offering true informed consent, noting that it’s not possible to know if someone is psychologically sound if they haven’t had a thorough mental health evaluation and that there are so many unknowns with medical transition, including that many of the therapies are not approved for the uses being employed.
With regret on the rise, “we need professionals that are prepared for detransitioners,” said Ms. Alleva. “Some of us have lost trust in health care professionals as a result of our experience.”
“It’s a huge feeling of institutional betrayal,” said Grace.
A version of this article first appeared on Medscape.com.
With regret on the rise, “we need professionals that are prepared for detransitioners,” said Ms. Alleva. “Some of us have lost trust in health care professionals as a result of our experience.”
“It’s a huge feeling of institutional betrayal,” said Grace.
A version of this article first appeared on Medscape.com.
In a unique Zoom conference, detransitioners from around the world shared their stories and urged clinicians to take their experiences seriously.
The forum was convened on what was dubbed #DetransitionAwarenessDay by Genspect, a parent-based organization that seeks to put the brakes on medical transitions for children and adolescents. The group has doubts about the gender-affirming care model supported by the World Professional Association for Transgender Health, the American Medical Association, the American Academy of Pediatrics, and other medical groups.
“Affirmative” medical care is defined as treatment with puberty blockers and cross-sex hormones for those with gender dysphoria to transition to the opposite sex and is often followed by gender reassignment surgery. However, there is growing concern among many doctors and other health care professionals as to whether this is, in fact, the best way to proceed for those under age 18, in particular, with several countries pulling back on medical treatment and instead emphasizing psychotherapy first.
The purpose of the second annual Genspect meeting was to shed light on the experiences of individuals who have detransitioned – those that identified as transgender and transitioned, but then decided to end their medical transition. People logged on from all over the United States, Canada, New Zealand, Australia, the United Kingdom, Germany, Spain, Chile, and Brazil, among other countries.
“This is a minority within a minority,” said Genspect advisor Stella O’Malley, adding that the first meeting in 2021 was held because “too many people were dismissing the stories of the detransitioners.” Ms. O’Malley is a psychotherapist, a clinical advisor to the Society for Evidence-Based Gender Medicine, and a founding member of the International Association of Therapists for Desisters and Detransitioners.
“It’s become blindingly obvious over the last year that ... ‘detrans’ is a huge part of the trans phenomenon,” said Ms. O’Malley, adding that detransitioners have been “undermined and dismissed.”
Laura Edwards-Leeper, PhD (@DrLauraEL), a prominent gender therapist who has recently expressed concern regarding adequate gatekeeping when treating youth with gender dysphoria, agreed.
She tweeted: “You simply can’t call yourself a legit gender provider if you don’t believe that detransitioners exist. As part of the informed consent process for transitioning, it is unethical to not discuss this possibility with young people.” Dr. Edwards-Leeper is professor emeritus at Pacific University in Hillsboro, Ore.
Speakers in the forum largely offered experiences, not data. They pointed out that there has been little to no study of detransition, but all testified that it was less rare than it has been portrayed by the transgender community.
Struggles with going back
“There are so many reasons why people detransition,” said Sinead Watson, aged 30, a Genspect advisor who transitioned from female to male, starting in 2015, and who decided to detransition in 2019. Citing a study by Lisa Littman, MD, MPH, published in 2021, Ms. Watson said the most common reasons for detransitioning were realizing that gender dysphoria was caused by other issues; internal homophobia; and the unbearable nature of transphobia.
Ms. Watson said the hardest part of detransitioning was admitting to herself that her transition had been a mistake. “It’s embarrassing and you feel ashamed and guilty,” she said, adding that it may mean losing friends who now regard you as a “bigot, while you’re also dealing with transition regret.”
“It’s a living hell, especially when none of your therapists or counselors will listen to you,” she said. “Detransitioning isn’t fun.”
Carol (@sourpatches2077) said she knew for a year that her transition had been a mistake.
“The biggest part was I couldn’t tell my family,” said Carol, who identifies as a lesbian. “I put them through so much. It seems ridiculous to go: ‘Oops, I made this huge [expletive] mistake,’ ” she said, describing the moment she did tell them as “devastating.”
Grace (@hormonehangover) said she remembers finally hitting a moment of “undeniability” some years after transitioning. “I accept it, I’ve ruined my life, this is wrong,” she remembers thinking. “It was devastating, but I couldn’t deny it anymore.”
Don’t trust therapists
People experiencing feelings of unease “need a therapist who will listen to them,” said Ms. Watson. When she first detransitioned, her therapists treated her badly. “They just didn’t want to speak about detransition,” she said, adding that “it was like a kick in the stomach.”
Ms. Watson said she’d like to see more training about detransition, but also on “preventative techniques,” adding that many people transition who should not. “I don’t want more detransitioners – I want less.
“In order for that to happen, we need to treat people with gender dysphoria properly,” said Ms. Watson, adding that the affirmative model is “disgusting, and that’s what needs to change.”
“I would tell somebody to not go to a therapist,” said Carol. Identifying as a butch lesbian, she felt like her therapists had pushed her into transitioning to male. “The No. 1 thing not understood by the mental health professionals is that the vast majority of homosexuals were gender-nonconforming children.” She added that this is especially true of butch lesbians.
Therapists – and doctors – also need to acknowledge both the trauma of transition and detransition, she said.
Kaiser, where she had transitioned, offered her breast reconstruction. Carol said it felt demeaning. “Like you’re Mr. Potatohead: ‘Here, we can just ... put on some new parts and you’re good to go.’ ”
“Doctors are concretizing transient obsessions,” said Helena Kerschner (@lacroicsz), quoting a chatroom user.
Ms. Kerschner gave a presentation on “fandom”: becoming obsessed with a movie, book, TV show, musician, or celebrity, spending every waking hour chatting online or writing fan fiction, or attempting to interact with the celebrity online. It’s a fantasy-dominated world and “the vast majority” of participants are teenage girls who are “identifying as trans,” in part, because they are fed a community-reinforced message that it’s better to be a boy.
Therapists and physicians who help them transition “are harming them for life based on something they would have grown out of or overcome without the permanent damage,” Ms. Kerschner added.
Doctors ‘gaslighting’ people into believing that transition is the answer
A pervasive theme during the webinar was that many people are being misdiagnosed with gender dysphoria, which may not be resolved by medical transition.
Allie, a 22-year-old who stopped taking testosterone after 1½ years, said she initially started the transition to male when she gave up trying to figure out why she could not identify with, or befriend, women, and after a childhood and adolescence spent mostly in the company of boys and being more interested in traditionally male activities.
She endured sexual abuse as a teenager and her parents divorced while she was in high school. Allie also had multiple suicide attempts and many incidents of self-harm. When she decided to transition, at age 18, she went to a private clinic and received cross-sex hormones within a few months of her first and only 30-minute consultation. “There was no explorative therapy,” she said, adding that she was never given a formal diagnosis of gender dysphoria.
For the first year, she said she was “over the freaking moon” because she felt like it was the answer. But things started to unravel while she attended university, and she attempted suicide at age 20. A social worker at the school identified her symptoms – which had been the same since childhood – as autism. She then decided to cease her transition.
Another detransitioner, Laura Becker, said it took 5 years after her transition to recognize that she had undiagnosed PTSD from emotional and psychiatric abuse. Despite a history of substance abuse, self-harm, suicidal ideation, and other mental health issues, she was given testosterone and had a double mastectomy at age 20. She became fixated on gay men, which devolved into a methamphetamine- and crack-fueled relationship with a man she met on the gay dating platform Grindr.
“No one around me knew any better or knew how to help, including the medical professionals who performed the mastectomy and who casually signed off and administered my medical transition,” she said.
Once she was aware of her PTSD she started to detransition, which itself was traumatic, said Laura.
Limpida, aged 24, said he felt pushed into transitioning after seeking help at a Planned Parenthood clinic. He identified as trans at age 15 and spent years attempting to be a woman socially, but every step made him feel more miserable, he said. When he went to the clinic at age 21 to get estrogen, he said he felt like the staff was dismissive of his mental health concerns – including that he was suicidal, had substance abuse issues, and was severely depressed. He was told he was the “perfect candidate” for transitioning.
A year later, he said he felt worse. The nurse suggested he seek out surgery. After Limpida researched what was involved, he decided to detransition. He has since received an autism diagnosis.
Robin, also aged 24, said the idea of surgery had helped push him into detransitioning, which began in 2020 after 4 years of estrogen. He said he had always been gender nonconforming and knew he was gay at an early age. He believes that gender-nonconforming people are “gaslighted” into thinking that transitioning is the answer.
Lack of evidence-based, informed consent
Michelle Alleva, who stopped identifying as transgender in 2020 but had ceased testosterone 4 years earlier because of side effects, cited what she called a lack of evidence base for the effectiveness and safety of medical transitions.
“You need to have a really, really good evidence base in place if you’re going straight to an invasive treatment that is going to cause permanent changes to your body,” she said.
Access to medical transition used to involve more “gatekeeping” through mental health evaluations and other interventions, she said, but there has been a shift from treating what was considered a psychiatric issue to essentially affirming an identity.
“This shift was activist driven, not evidence based,” she emphasized.
Most studies showing satisfaction with transition only involve a few years of follow-up, she said. She added that the longest follow-up study of transition, published in 2011 and spanning 30 years, showed that the suicide rate 10-15 years post surgery was 20 times higher than in the general population.
Studies of regret were primarily conducted before the rapid increase in the number of trans-identifying individuals, she said, which makes it hard to draw conclusions about pediatric transition. Getting estimates on this population is difficult because so many who detransition do not tell their clinicians, and many studies have short follow-up times or a high loss to follow-up.
Ms. Alleva also took issue with the notion that physicians were offering true informed consent, noting that it’s not possible to know if someone is psychologically sound if they haven’t had a thorough mental health evaluation and that there are so many unknowns with medical transition, including that many of the therapies are not approved for the uses being employed.
With regret on the rise, “we need professionals that are prepared for detransitioners,” said Ms. Alleva. “Some of us have lost trust in health care professionals as a result of our experience.”
“It’s a huge feeling of institutional betrayal,” said Grace.
A version of this article first appeared on Medscape.com.
French fries vs. almonds every day for a month: What changes?
Eat french fries every day for a month? Sure, as long as it’s for science.
That’s exactly what 107 people did in a scientific study, while 58 others ate a daily serving of almonds with the same number of calories.
At the end of the study, the researchers found no significant differences between the groups in people’s total amount of fat or their fasting glucose measures, according to the study, published Feb. 18 in the American Journal of Clinical Nutrition.
The french fry eaters gained a little more weight, but it was not statistically significant. The people who ate french fries gained 0.49 kilograms (just over a pound), vs. about a tenth of a kilogram (about one-fifth of a pound) in the group of people who ate almonds.
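The kilogram-to-pound conversions reported above can be verified with a quick calculation (a minimal sketch; the almond group's figure is approximate, since the article gives it only as "about a tenth of a kilogram"):

```python
# Sanity check on the weight-change unit conversions reported in the study:
# 0.49 kg for the french fry group vs. roughly 0.1 kg for the almond group.
KG_TO_LB = 2.20462  # standard kilogram-to-pound conversion factor

fry_gain_kg = 0.49     # french fry group, as reported
almond_gain_kg = 0.10  # almond group, approximate ("about a tenth of a kilogram")

fry_gain_lb = round(fry_gain_kg * KG_TO_LB, 2)       # "just over a pound"
almond_gain_lb = round(almond_gain_kg * KG_TO_LB, 2)  # "about one-fifth of a pound"

print(fry_gain_lb, almond_gain_lb)  # → 1.08 0.22
```

Both figures match the article's parenthetical descriptions: 1.08 lb is indeed just over a pound, and 0.22 lb is close to one-fifth of a pound.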
“The take-home is if you like almonds, eat some almonds. If you like potatoes, eat some potatoes, but don’t overeat either,” said study leader David B. Allison, PhD, a professor at Indiana University’s School of Public Health in Bloomington. “It’s probably good to have a little bit of each – each has some unique advantages in terms of nutrition.”
“This study confirms what registered dietitian nutritionists already know – all foods can fit. We can eat almonds, french fries, kale, and cookies,” said Melissa Majumdar, a registered dietitian and certified specialist in obesity and weight management at Emory University Hospital Midtown in Atlanta. “The consumption of one food or the avoidance of another does not make a healthy diet.”
At the same time, people should not interpret the results to mean it’s OK to eat french fries all day, every day. “We know that while potatoes are nutrient dense, the frying process reduces the nutritional value,” Ms. Majumdar said.
“Because french fries are often consumed alongside other nutrient-poor or high-fat foods, they should not be consumed daily but can fit into an overall balanced diet,” she added.
Would you like fries with that?
The researchers compared french fries to almonds because almonds are known for their positive effects on energy balance and body composition, and for their low glycemic index. The research was partly funded by the Alliance for Potato Research and Education.
French fries are an incredibly popular food in the United States. According to an August 2021 post on the food website Mashed, Americans eat an average of 30 pounds of french fries each year.
Although consumption of almonds is increasing, Americans eat far less in volume each year than they do fries – an estimated 2.4 pounds of almonds per person, according to August 2021 figures from the Almond Board of California.
Dr. Allison and colleagues recruited 180 healthy adults for the study. Their average age was 30, and about two-thirds were women.
They randomly assigned 60 people to add about a medium serving of plain french fries (Tater Pals Ovenable Crinkle Cut Fries, Simplot Foods) to their diet. Another 60 people were assigned to the same amount of Tater Pals fries with herbs (oregano, basil, garlic, onion, and rosemary), and another 60 people ate Wonderful brand roasted and salted almonds.
Investigators told people to add either the potatoes or nuts to their diet every day for a month and gave no further instructions.
After some people dropped out of the study, results were based on 55 who ate regular french fries, 52 who ate french fries with herbs and spices, and 58 who ate the nuts.
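The enrollment and completion figures above imply a modest attrition rate, which can be tallied directly (assuming all 180 randomized participants form the denominator):

```python
# Attrition check: 180 adults were randomized (60 per arm); the analysis
# was based on 55 + 52 + 58 completers.
enrolled = 180
completers = {"plain fries": 55, "herb fries": 52, "almonds": 58}

total_completers = sum(completers.values())  # 165 participants analyzed
dropouts = enrolled - total_completers       # 15 participants lost
dropout_pct = round(dropouts / enrolled * 100, 1)

print(total_completers, dropouts, dropout_pct)  # → 165 15 8.3
```

A dropout rate of roughly 8% over one month is fairly low for a free-living dietary trial, which supports the comparability of the three groups at analysis.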
The researchers scanned people to detect any changes in fat mass. They also measured changes in body weight, carbohydrate metabolism, and fasting blood glucose and insulin.
Key findings
Changes in total body fat mass were not significantly different between the french fry groups and the almond group.
In terms of glycemic control, eating french fries for a month “is no better or worse than consuming a caloric equivalent of nuts,” the researchers noted.
Similarly, the change in total fat mass did not differ significantly among the three treatment groups.
Adding the herb and spice mix to the french fries did not make a significant difference in glycemic control, contrary to what the researchers thought might happen.
And fasting glucose, insulin, and HbA1c levels did not differ significantly between the combined french fry and almond groups. When comparisons were made among the three groups, the almond group had a lower insulin response, compared to the plain french fry group.
Many different things could be explored in future research, said study coauthor Rebecca Hanson, a registered dietitian nutritionist and research study coordinator at the University of Alabama at Birmingham. “People were not told to change their exercise or diet, so there are so many different variables,” she said. Repeating the research in people with diabetes is another possibility going forward.
The researchers acknowledged that 30 days may not have been long enough to show a significant difference. But they also noted that many previous studies were observational while they used a randomized controlled trial, considered a more robust study design.
Dr. Allison, the senior author, emphasized that this is just one study. “No one study has all the answers.
“I don’t want to tell you our results are the be all and end all or that we’ve now learned everything there is to learn about potatoes and almonds,” he said.
“Our study shows for the variables we looked at ... we did not see important, discernible differences,” he said. “That doesn’t mean if you ate 500 potatoes a day or 500 kilograms of almonds it would be the same. But at these modest levels, it doesn’t seem to make much difference.”
The study was funded by grants from the National Institutes of Health and from the Alliance for Potato Research and Education.
Asked if the industry support should be a concern, Ms. Majumdar said, “Funding from a specific food board does not necessarily dilute the results of a well-designed study. It’s not uncommon for a funding source to come from a food board that may benefit from the findings. Research money has to come from somewhere.
“This study has reputable researchers, some of the best in the field,” she said.
The U.S. produces the most almonds in the world, and California is the only state where almonds are grown commercially. Asked for the almond industry’s take on the findings, “We don’t have a comment,” said Rick Kushman, a spokesman for the Almond Board of California.
A version of this article first appeared on WebMD.com.
FROM AMERICAN JOURNAL OF CLINICAL NUTRITION
Is cancer testing going to the dogs? Nope, ants
The oncologist’s new best friend
We know that dogs have very sensitive noses. They can track criminals and missing persons and sniff out drugs and bombs. They can even detect cancer cells … after months of training.
And then there are ants.
Cancer cells produce volatile organic compounds (VOCs), which can be sniffed out by dogs and other animals with sufficiently sophisticated olfactory senses. A group of French investigators decided to find out if Formica fusca is such an animal.
First, they placed breast cancer cells and healthy cells in a petri dish. The sample of cancer cells, however, included a sugary treat. “Over successive trials, the ants got quicker and quicker at finding the treat, indicating that they had learned to recognize the VOCs produced by the cancerous cells, using these as a beacon to guide their way to the sugary delight,” according to IFL Science.
When the researchers removed the treat, the ants still went straight for the cancer cells. Then they removed the healthy cells and substituted another type of breast cancer cell, with just one type getting the treat. They went for the cancer cells with the treat, “indicating that they were capable of distinguishing between the different cancer types based on the unique pattern of VOCs emitted by each one,” IFL Science explained.
It’s just another chapter in the eternal struggle between dogs and ants. Dogs need months of training to learn to detect cancer cells; ants can do it in 30 minutes. Over the course of a dog’s training, Fido eats more food than 10,000 ants combined. (Okay, we’re guessing here, but it’s got to be a pretty big number, right?)
Then there’s the warm and fuzzy factor. Just look at that picture. Who wouldn’t want a cutie like that curling up in the bed next to you?
Console War II: Battle of the Twitter users
Video games can be a lot of fun, provided you’re not playing something like Rock Simulator. Or Surgeon Simulator. Or Surgeon Simulator 2. Yes, those are all real games. But calling yourself a video gamer invites a certain negative connotation, and nowhere can that be better exemplified than the increasingly ridiculous console war.
For those who don’t know their video game history, back in the early 90s Nintendo and Sega were the main video game console makers. Nintendo had Mario, Sega had Sonic, and everyone had an opinion on which was best. With Sega now but a shell of its former self and Nintendo viewed as too “casual” for the true gaming connoisseur, today’s battle pits Playstation against Xbox, and fans of both consoles spend their time trying to one-up each other in increasingly silly online arguments.
That brings us nicely to a Twitter user named “Shreeveera,” who is very vocal about his love of Playstation and hatred of the Xbox. Importantly, for LOTME purposes, Shreeveera identified himself as a doctor on his profile, and in the middle of an argument, Xbox enthusiasts called his credentials into question.
At this point, most people would recognize that there are very few noteworthy console-exclusive video games in today’s world and that any argument about consoles essentially comes down to which console design you like or which company you find less distasteful, and they would step away from the Twitter argument. Shreeveera is not most people, and he decided the next logical move was to post a video of himself and an anesthetized patient about to undergo a laparoscopic cholecystectomy.
This move did prove that he was indeed a doctor, but the ethics of posting such a video with a patient in the room are dubious at best. Since Shreeveera also listed the hospital he worked at, numerous Twitter users review-bombed the hospital with one-star reviews. Shreeveera’s fate is unknown, but he did take down the video and removed “doctor by profession” from his profile. He also made a second video asking Twitter to stop trying to ruin his life. We’re sure that’ll go well. Twitter is known for being completely fair and reasonable.
Use your words to gain power
We live in the age of the emoji. The use of emojis in texts and emails is basically the new shorthand. It’s a fun and easy way to chat with people close to us, but a new study shows that it doesn’t help in a business setting. In fact, it may do a little damage.
The use of images such as emojis in communication or logos can make a person seem less powerful than someone who opts for written words, according to Elinor Amit, PhD, of Tel Aviv University and associates.
Participants in their study were asked to imagine shopping with a person wearing a T-shirt. Half were then shown the logo of the Red Sox baseball team and half saw the words “Red Sox.” In another scenario, they were asked to imagine attending a retreat of a company called Lotus. Then half were shown an employee wearing a shirt with an image of a lotus flower and half saw the verbal logo “Lotus.” In both scenarios, the individuals wearing shirts with images were seen as less powerful than the people who wore shirts with words on them.
Why is that? In a Eurekalert statement, Dr. Amit said that “visual messages are often interpreted as a signal for desire for social proximity.” In a world with COVID-19, that could give anyone pause.
That desire for more social proximity, in turn, signals a loss of power, because research shows that people who seek out the company of others are perceived as less powerful than people who don’t.
With the reduced social proximity we have these days, we may want to keep things cool and lighthearted, especially in work emails with people who we’ve never met. It may be, however, that using your words to say thank you in the multitude of emails you respond to on a regular basis is better than that thumbs-up emoji. Nobody will think less of you.
Should Daylight Saving Time still be a thing?
This past week, we experienced the spring-forward portion of Daylight Saving Time, which took an hour of sleep away from us all. Some of us may still be struggling to find our footing with the time change, but at least it’s still sunny out at 7 pm. For those who don’t see the point of changing the clocks twice a year, there are actually some good reasons to stop.
Sen. Marco Rubio, sponsor of a bill to make the time change permanent, put it simply: “If we can get this passed, we don’t have to do this stupidity anymore.” Message received, apparently, since the measure just passed unanimously in the Senate.
It’s not clear if President Biden will approve it, though, because there’s a lot that comes into play: economic needs, seasonal depression, and safety.
“I know this is not the most important issue confronting America, but it’s one of those issues where there’s a lot of agreement,” Sen. Rubio said.
Not total agreement, though. The National Association of Convenience Stores is opposed to the bill, and Reuters noted that one witness at a recent hearing said the time change “is like living in the wrong time zone for almost eight months out of the year.”
Many people, however, seem to be leaning toward the permanent spring-forward as it gives businesses a longer window to provide entertainment in the evenings and kids are able to play outside longer after school.
Honestly, we’re leaning toward whichever one can reduce seasonal depression.
Characterizing Opioid Response in Older Veterans in the Post-Acute Setting
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predisposes them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common among this population, with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If pain and functional improvements are minimal, tapering opioids and emphasizing nonopioid pain management strategies should be considered. Some patients will struggle with this approach. Directly asking patients about the effectiveness of opioids is challenging. Opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies have assessed patient perspectives regarding opioid difficulties as well as their helpfulness, which could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study in veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 The study looked only at analgesics as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (ie, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids rather than relying on a single self-report is a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates, with particular attention to covariates that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We considered that veterans with pain poorly responsive to opioids would be more likely to have factors that would place them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in that population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receiving ≥ 2 opioids ordered as needed over the prespecified 24-hour observation period; and having ≥ 2 pre-and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including a standing order of opioids. To obtain the sample, we initially characterized all veterans on the 4 days residing in the CLC-PAC unit as those reporting at least moderate pain (≥ 4) and those who reported no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
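The screening steps above can be sketched as a small filter. The record layout and field names below are hypothetical, chosen only to illustrate the three inclusion criteria; they are not the study's actual data schema.

```python
MODERATE_PAIN = 4  # cut point on the 0-10 numeric rating scale

def meets_criteria(resident_day: dict) -> bool:
    """Apply the study's inclusion criteria to one resident-day.

    resident_day is a hypothetical record with:
      'pain_scores'      - all numeric pain ratings recorded that day
      'prn_opioid_doses' - count of as-needed opioid administrations
      'score_pairs'      - count of pre/postopioid pain-score pairs
    """
    return (
        max(resident_day["pain_scores"], default=0) >= MODERATE_PAIN
        and resident_day["prn_opioid_doses"] >= 2
        and resident_day["score_pairs"] >= 2
    )
```

A resident-day reporting only mild pain (all scores below 4), or with fewer than 2 as-needed doses or score pairs, is screened out regardless of the other fields.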
Medication administration data were obtained from the VA corporate data warehouse, which houses all barcode medication administration data collected at the point of care. The dataset includes pain scores gathered by nursing staff before and after administering an as-needed analgesic. The corporate data warehouse records date/time of pain scores and the analgesic name, dosage, formulation, and date/time of administration. Using a standardized assessment form developed iteratively, we calculated opioid dosage in oral morphine equivalents (OME) for comparison.11,12 All abstracted data were reexamined for accuracy. Data initially were collected in an anonymized, blinded fashion. Participants were then unblinded for chart review. Initial data were captured in resident-days instead of unique residents because an individual resident might have been admitted on several observation days. We were primarily interested in how pain responded to opioids administered in response to resident request; therefore, we did not examine response to opioids that were continuously ordered (ie, scheduled). We did consider scheduled opioids when calculating total daily opioid dosage during the chart review.
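The OME calculation can be sketched as below. The conversion factors are commonly cited approximations included only for illustration; the study cites its own conversion references (11, 12), which may differ. Methadone and fentanyl are omitted here because their conversions are dose- and route-dependent.

```python
# Approximate oral-morphine-equivalent factors (mg OME per mg drug).
# Illustrative values only - not necessarily those the authors used.
OME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def to_ome(drug: str, dose_mg: float) -> float:
    """Convert one oral opioid dose to oral morphine equivalents (mg)."""
    return dose_mg * OME_FACTORS[drug.lower()]

def daily_ome(doses) -> float:
    """Total daily OME for one resident-day.

    doses: iterable of (drug_name, dose_mg) tuples, covering both
    scheduled and as-needed administrations, per the chart review.
    """
    return sum(to_ome(drug, mg) for drug, mg in doses)
```

For example, 15 mg of morphine plus 5 mg of oxycodone totals 22.5 mg OME under these factors.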
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as change in the pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid administration pain score was the first score after opioid administration if obtained within 3 hours of administration. Scores collected > 3 hours after opioid administration were excluded because they no longer accurately reflected the impact of the opioid due to the short half-lives. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside of the 24-hour window); this occurred once for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10 and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10 and postopioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged together to determine that individual’s response to as-needed opioids. In the previous example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
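The ∆-score arithmetic above can be written compactly. The tuple layout is a hypothetical representation of the pre/post pairs described in the text, with the 3-hour exclusion window applied.

```python
def mean_delta(pairs, window_hours=3.0):
    """Mean pre-minus-post pain change over one 24-hour resident-day.

    pairs: list of (pre_score, post_score, hours_to_post) tuples, one per
    as-needed opioid dose. Post scores recorded more than window_hours
    after administration are excluded, as in the study.
    Returns None if no valid pairs remain after exclusion.
    """
    deltas = [pre - post for pre, post, hrs in pairs if hrs <= window_hours]
    return sum(deltas) / len(deltas) if deltas else None
```

Using the worked example from the text, `mean_delta([(10, 2, 1.0), (10, 6, 2.0)])` averages ∆ values of 8 and 4 to give 6.0.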
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable: those with higher mean ∆ scores were considered to have pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models controlling for average pre-opioid pain rating scores to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This exploratory study was not adequately powered to detect differences across the spectrum of opioid responsiveness, although differences are reported in this article.
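The models described here regress the mean ∆ score on a predictor of interest plus the average pre-opioid pain score. A minimal numpy sketch of that specification (the study itself used Stata 15; the variable names and toy data below are invented for illustration):

```python
import numpy as np

def ols_beta(y, x, pre_pain):
    """Return the coefficient on x from y ~ intercept + x + pre_pain.

    Sketches the study's model: opioid responsiveness (mean delta score)
    regressed on a variable of interest, controlling for average
    pre-opioid pain rating.
    """
    X = np.column_stack([np.ones_like(x), x, pre_pain])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

# Toy data: 40 residents, a binary psychiatric-diagnosis indicator, and
# average pre-opioid pain scores; true effect of the diagnosis is -1.0.
rng = np.random.default_rng(0)
pre_pain = rng.uniform(4, 10, size=40)
psych_dx = rng.integers(0, 2, size=40).astype(float)
y = 4.0 - 1.0 * psych_dx + 0.1 * pre_pain + rng.normal(0, 0.5, size=40)

print(round(ols_beta(y, psych_dx, pre_pain), 2))  # close to -1.0 on this seed
```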
Results
Over the 4-day observation period there were 146 resident-days. Of these, 88 (60.3%) included at least 1 pain score of ≥ 4, and 61 (41.8%) included ≥ 1 as-needed opioid administered for pain. We identified 46 resident-days meeting the study criterion of ≥ 2 pre- and postanalgesic scores, representing 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted to the CLC-PAC unit on 3 of the 4 observation days. For individuals admitted on several days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between opioid responsiveness and admission indication or medical comorbidities, including active cancer (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most were tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of SUD in remission, including 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, whether active or in remission. Consults to other services suggesting distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). The association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association between palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight participants (92.7%) received nonopioid adjuvant analgesics for pain. More than half received antidepressants (56.1%) or gabapentinoids (51.2%), although we did not assess whether these were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosage in OMEs, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe the relationship with clinical correlates in the understudied PAC population.
Earlier research has demonstrated a relationship between the presence of psychiatric disorders and increased likelihood of receiving any analgesics among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings suggest that the practice of escalating the opioid dosage to treat high levels of pain in patients with a comorbid psychiatric diagnosis should be reexamined, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities, including active cancer; however, we did not identify a relationship between any of these and opioid responsiveness. Although SUDs were highly prevalent in our sample, there was no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in genes encoding opioid receptors and hepatic enzymes involved in drug metabolism, as well as an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, even though each opioid dose might be perceived as beneficial.15 Psychosocial distress is increasingly recognized as an important factor in opioid response. Adverse selection is the process by which those with psychosocial distress and/or SUDs come to be prescribed more opioids for longer durations.16 Our data suggest that this process could play a role in PAC settings. In addition, exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, also is possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors likely is predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers with a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could then revisit pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we collected data only regarding response to opioids in this study, any pain medication administered as needed (ie, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied in routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoiding increases to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
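The 50/90 OME daily thresholds discussed above amount to a simple screening rule. An illustrative helper (not part of either guideline) might look like:

```python
def ome_risk_flag(total_daily_ome: float) -> str:
    """Classify a total daily opioid dosage against guideline thresholds.

    Thresholds follow the CDC (2016) and VA/DoD (2017) guidance cited in
    the text: reassess risks and benefits above 50 OME/day, and avoid
    exceeding 90 OME/day in most circumstances.
    """
    if total_daily_ome > 90:
        return "avoid: exceeds 90 OME/day"
    if total_daily_ome > 50:
        return "reassess risks and benefits"
    return "below reassessment threshold"

print(ome_risk_flag(30))   # below reassessment threshold
print(ome_risk_flag(60))   # reassess risks and benefits
print(ome_risk_flag(120))  # avoid: exceeds 90 OME/day
```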
Limitations
This study has several limitations, the most significant being its small sample size, a consequence of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who received opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits the power to detect small differences. The data collected here should inform formal power calculations to select adequate sample sizes for subsequent larger studies. Validation studies that reproduce these findings, including in samples drawn from the same population on different dates, are an important next step. Moreover, we had data on only a single dimension of pain (intensity/severity), as measured by the pain scale that nursing staff used to make a real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider using pain measures that provide multidimensional assessment (ie, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional and included only a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have since shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, thereby limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids, and it is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain the robust correlations with psychiatric comorbidities; it also would be expected to be overcome with higher opioid dosages, whereas our study demonstrated less responsiveness at higher dosages. These data suggest that some individuals’ pain might be poorly opioid responsive and that psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work should validate this method using larger samples across several clinical sites. Finally, our regression models controlled for average pre-opioid pain rating scores, which is only 1 of the covariates important for examining these effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data, but the method requires further validation before being scaled for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies further research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids, regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
1. Marshall TL, Reinhardt JP. Pain management in the last 6 months of life: predictors of opioid and non-opioid use. J Am Med Dir Assoc. 2019;20(6):789-790. doi:10.1016/j.jamda.2019.02.026
2. Tait RC, Chibnall JT. Pain in older subacute care patients: associations with clinical status and treatment. Pain Med. 2002;3(3):231-239. doi:10.1046/j.1526-4637.2002.02031.x
3. Pimentel CB, Briesacher BA, Gurwitz JH, Rosen AB, Pimentel MT, Lapane KL. Pain management in nursing home residents with cancer. J Am Geriatr Soc. 2015;63(4):633-641. doi:10.1111/jgs.13345
4. Hunnicutt JN, Tjia J, Lapane KL. Hospice use and pain management in elderly nursing home residents with cancer. J Pain Symptom Manage. 2017;53(3):561-570. doi:10.1016/j.jpainsymman.2016.10.369
5. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1-49. doi:10.15585/mmwr.rr6501e1
6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34-49. doi:10.1037/ser0000099
7. Goesling J, Moser SE, Lin LA, Hassett AL, Wasserman RA, Brummett CM. Discrepancies between perceived benefit of opioids and self-reported patient outcomes. Pain Med. 2018;19(2):297-306. doi:10.1093/pm/pnw263
8. Sullivan M, Von Korff M, Banta-Green C. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345-353. doi:10.1016/j.pain.2010.02.037
9. Brennan PL, Greenbaum MA, Lemke S, Schutte KK. Mental health disorder, pain, and pain treatment among long-term care residents: evidence from the Minimum Data Set 3.0. Aging Ment Health. 2019;23(9):1146-1155. doi:10.1080/13607863.2018.1481922
10. Woo A, Lechner B, Fu T, et al. Cut points for mild, moderate, and severe pain among cancer and non-cancer patients: a literature review. Ann Palliat Med. 2015;4(4):176-183. doi:10.3978/j.issn.2224-5820.2015.09.04
11. Centers for Disease Control and Prevention. Calculating total daily dose of opioids for safer dosage. 2017. Accessed December 15, 2021. https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. Nielsen S, Degenhardt L, Hoban B, Gisev N. Comparing opioids: a guide to estimating oral morphine equivalents (OME) in research. NDARC Technical Report No. 329. National Drug and Alcohol Research Centre; 2014. Accessed December 15, 2021. http://www.drugsandalcohol.ie/22703/1/NDARC Comparing opioids.pdf
13. Smith HS. Variations in opioid responsiveness. Pain Physician. 2008;11(2):237-248.
14. Collin E, Cesselin F. Neurobiological mechanisms of opioid tolerance and dependence. Clin Neuropharmacol. 1991;14(6):465-488. doi:10.1097/00002826-199112000-00001
15. Higgins C, Smith BH, Matthews K. Evidence of opioid-induced hyperalgesia in clinical populations after chronic opioid exposure: a systematic review and meta-analysis. Br J Anaesth. 2019;122(6):e114-e126. doi:10.1016/j.bja.2018.09.019
16. Howe CQ, Sullivan MD. The missing ‘P’ in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry. 2014;36(1):99-104. doi:10.1016/j.genhosppsych.2013.10.003
17. Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. HHS Publ No PEP19-5068, NSDUH Ser H-54. 2019;170:51-58. Accessed December 15, 2021. https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHNationalFindingsReport2018/NSDUHNationalFindingsReport2018.pdf
18. World Health Organization. WHO’s cancer pain ladder for adults. Accessed September 21, 2018. www.who.int/ncds/management/palliative-care/Infographic-cancer-pain-lowres.pdf
19. Ballantyne JC, Kalso E, Stannard C. WHO analgesic ladder: a good concept gone astray. BMJ. 2016;352:i20. doi:10.1136/bmj.i20
20. The Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guideline for opioid therapy for chronic pain. US Dept of Veterans Affairs and Dept of Defense; 2017. Accessed December 15, 2021. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf
21. Defense & Veterans Pain Rating Scale (DVPRS). Defense & Veterans Center for Integrative Pain Management. Accessed July 21, 2021. https://www.dvcipm.org/clinical-resources/defense-veterans-pain-rating-scale-dvprs/
22. Guy GP Jr, Zhang K, Bohm MK, et al. Vital signs: changes in opioid prescribing in the United States, 2006–2015. MMWR Morb Mortal Wkly Rep. 2017;66(26):697-704. doi:10.15585/mmwr.mm6626a4
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predispose them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common in this population, with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If improvements in pain and function are minimal, tapering opioids and nonopioid pain management strategies should be considered. Some patients will struggle with this approach. Directly asking patients about the effectiveness of opioids is challenging: opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies have assessed patient perspectives on both the difficulties and the helpfulness of opioids, an approach that could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study of veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 That study examined analgesics only as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (ie, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids, rather than relying on a single self-report, would be a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates with particular attention to variates that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We considered that veterans with pain poorly responsive to opioids would be more likely to have factors that would place them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in that population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receiving ≥ 2 opioids ordered as needed over the prespecified 24-hour observation period; and having ≥ 2 pre- and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including standing orders of opioids. To obtain the sample, we initially characterized all veterans residing in the CLC-PAC unit on the 4 days as those reporting at least moderate pain (≥ 4) and those reporting no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing a higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
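The inclusion criteria above amount to a per-resident-day filter. A minimal sketch (the record layout and the `meets_inclusion` helper are invented for illustration; the thresholds follow the text):

```python
MODERATE_PAIN = 4  # cut point for at least moderate pain on the 0-10 scale

def meets_inclusion(day) -> bool:
    """Apply the study's inclusion criteria to one resident-day record.

    `day` is a dict with keys: 'pain_scores' (all pain scores recorded that
    day), 'prn_opioid_doses' (count of as-needed opioids administered), and
    'paired_scores' (count of pre/postopioid pain score pairs).
    """
    return (
        max(day["pain_scores"], default=0) >= MODERATE_PAIN
        and day["prn_opioid_doses"] >= 2
        and day["paired_scores"] >= 2
    )

sample_day = {"pain_scores": [3, 6, 5], "prn_opioid_doses": 2, "paired_scores": 2}
print(meets_inclusion(sample_day))  # True
```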
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors likely is predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could consider revisiting pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we only collected data regarding response to opioids in this study, any pain medication administered as needed (ie, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoid increasing dosages to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
Limitations
This study has several limitations, the most significant is its small sample size because of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who receive opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits power to detect small differences. Data collected should be used to inform formal power calculations before subsequent larger studies to select adequate sample size. Validation studies, including samples from the same population using different dates, which reproduce findings are an important step. Moreover, we only had data on a single dimension of pain (intensity/severity), as measured by the pain scale, which nursing staff used to make a real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider using pain measures that provide multidimensional assessment (ie, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, therefore limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids. It is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain robust correlations with psychiatric comorbidities. Also, simple tolerance would be expected to be overcome with higher opioid dosages, whereas our study demonstrates less responsiveness. These data suggests that some individuals’ pain might be poorly opioid responsive, and psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method, using larger sample sizes and several clinical sites. Finally, we used regression models that controlled for average pre-opioid pain rating scores, which is only 1 covariate important for examining effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data but requires further validation before considering scaling for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predisposes them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common among this population with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If pain and functional improvements are minimal, tapering opioids and emphasizing nonopioid pain management strategies should be considered. Some patients will struggle with this approach, in part because directly asking patients about the effectiveness of opioids is challenging: opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies assessing patient perspectives on both the difficulties and the helpfulness of opioids relied on self-report, which could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study of veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 That study looked only at analgesics as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (eg, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids rather than relying on a single self-report would be a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates with particular attention to variates that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We considered that veterans with pain poorly responsive to opioids would be more likely to have factors that would place them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in that population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receiving ≥ 2 opioids ordered as needed over the prespecified 24-hour observation period; and having ≥ 2 pre- and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including a standing order of opioids. To obtain the sample, we initially characterized all veterans on the 4 days residing in the CLC-PAC unit as those reporting at least moderate pain (≥ 4) and those who reported no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
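As a concrete illustration, the sample-selection filter described above can be sketched in a few lines of Python. The record layout and field names here are hypothetical, invented for illustration; the study itself used Stata and VA administrative data.

```python
# Hypothetical resident-day records; field names are invented for illustration.
resident_days = [
    {"pain_scores": [2, 3, 1], "prn_opioids_given": 0, "pre_post_pairs": 0},
    {"pain_scores": [6, 8, 5], "prn_opioids_given": 3, "pre_post_pairs": 3},
    {"pain_scores": [7, 4, 6], "prn_opioids_given": 1, "pre_post_pairs": 1},
]

def meets_criteria(day):
    """Inclusion criteria: at least moderate pain (>= 4 of 10), >= 2 as-needed
    opioids administered, and >= 2 paired pre-/postopioid pain scores within
    the 24-hour observation window."""
    return (
        max(day["pain_scores"]) >= 4
        and day["prn_opioids_given"] >= 2
        and day["pre_post_pairs"] >= 2
    )

sample = [d for d in resident_days if meets_criteria(d)]
print(len(sample))  # → 1
```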
Medication administration data were obtained from the VA corporate data warehouse, which houses all barcode medication administration data collected at the point of care. The dataset includes pain scores gathered by nursing staff before and after administering an as-needed analgesic. The corporate data warehouse records the date/time of pain scores and the analgesic name, dosage, formulation, and date/time of administration. Using a standardized assessment form developed iteratively, we calculated opioid dosage in oral morphine equivalents (OME) for comparison.11,12 All abstracted data were reexamined for accuracy. Data initially were collected in an anonymized, blinded fashion. Participants were then unblinded for chart review. Initial data were captured in resident-days instead of unique residents because an individual resident might have been admitted on several observation days. We were primarily interested in how pain responded to opioids administered in response to resident request; therefore, we did not examine response to opioids that were continuously ordered (ie, scheduled). We did consider scheduled opioids when calculating total daily opioid dosage during the chart review.
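The OME conversion in this step follows published oral conversion factors (reference 11). A minimal sketch using the CDC's factors for a few common oral opioids follows; methadone's conversion factor is dose-dependent and fentanyl is transdermal, so both are omitted from this simplified example.

```python
# Oral morphine equivalent (OME) conversion factors per the CDC fact sheet
# (reference 11). Simplified: methadone (dose-dependent factor) and
# transdermal fentanyl are intentionally omitted.
OME_FACTORS = {
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "morphine": 1.0,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def total_daily_ome(doses):
    """Sum a day's opioid doses, each given as (drug_name, milligrams), in OME."""
    return sum(mg * OME_FACTORS[drug] for drug, mg in doses)

# 10 mg oxycodone (15 OME) + 50 mg tramadol (5 OME)
print(total_daily_ome([("oxycodone", 10), ("tramadol", 50)]))  # → 20.0
```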
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as change in the pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid administration pain score was the first score after opioid administration if obtained within 3 hours of administration. Scores collected > 3 hours after opioid administration were excluded because, given the drugs’ short half-lives, they no longer accurately reflected the impact of the opioid. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside of the 24-hour window); this occurred once for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10, and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10, and postopioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged together to determine that individual’s response to as-needed opioids. In the previous example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
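The ∆-score computation, including the 3-hour exclusion window, can be sketched as follows. This is a simplified illustration, not the authors' code; the record layout (pre score, post score, minutes until the post score) is assumed.

```python
# Each record: (pre_opioid_score, post_opioid_score, minutes_to_post_score).
# Post scores recorded > 3 hours (180 min) after administration are excluded,
# mirroring the exclusion rule described in the text.

def mean_delta_score(administrations):
    """Average pain reduction (∆) across a resident's as-needed opioid doses
    in a 24-hour window; returns None if no usable pre/post pair exists."""
    deltas = []
    for pre, post, minutes_to_post in administrations:
        if post is None or minutes_to_post > 180:
            continue  # no usable post-administration score
        deltas.append(pre - post)
    return sum(deltas) / len(deltas) if deltas else None

# Worked example from the text: pre 10 → post 2 (∆ = 8), then pre 10 → post 6 (∆ = 4)
print(mean_delta_score([(10, 2, 45), (10, 6, 90)]))  # → 6.0
```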
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable. Those with higher mean ∆ scores were considered to have pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models controlling for average pre-opioid pain rating scores to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This study was not adequately powered to detect differences across the spectrum of opioid responsiveness, although we report the associations we observed.
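The modeling approach — a linear regression of the mean ∆ score on a variable of interest while controlling for average pre-opioid pain — can be sketched with ordinary least squares. The data below are synthetic and the variable names are ours, not the authors'; the study itself was run in Stata.

```python
import numpy as np

# Synthetic illustration of the regression design: regress mean ∆ score on a
# predictor of interest (here, presence of any psychiatric diagnosis) while
# controlling for the average pre-opioid pain rating.
rng = np.random.default_rng(0)
n = 41                                           # matches the study's sample size
pre_pain = rng.uniform(4, 10, n)                 # average pre-opioid pain (0-10)
psych_dx = rng.integers(0, 2, n).astype(float)   # 1 = any psychiatric diagnosis
mean_delta = 0.5 * pre_pain - 1.0 * psych_dx + rng.normal(0.0, 1.0, n)

# Design matrix: intercept, control covariate, predictor of interest
X = np.column_stack([np.ones(n), pre_pain, psych_dx])
beta, *_ = np.linalg.lstsq(X, mean_delta, rcond=None)
print(beta)  # [intercept, pre-pain coefficient, psychiatric-dx coefficient]
```

With real data, `beta[2]` would estimate the adjusted association between psychiatric diagnosis and opioid responsiveness, analogous to the β coefficients reported in the Results.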
Results
Over the 4-day observational period there were 146 resident-days. Of these, 88 (60.3%) reported at least 1 pain score of ≥ 4. Of those, 61 (41.8%) received ≥ 1 as-needed opioid for pain. We identified 46 resident-days meeting study criteria of ≥ 2 pre- and postanalgesic scores. We identified 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted to the CLC-PAC unit on 3 of the 4 observation days. For individuals admitted several days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between admission indication, medical comorbidities (including active cancer), and opioid responsiveness (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most were tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of SUD in remission, with 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, either active or in remission. Consults to other services that suggested distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). The association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association between palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight (92.7%) participants received nonopioid adjuvant analgesics for pain. More than half (56.1%) received antidepressants or gabapentinoids (51.2%), although we did not assess whether they were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosage in OMEs, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe the relationship with clinical correlates in the understudied PAC population.
Earlier research has demonstrated a relationship between the presence of psychiatric disorders and increased likelihood of receiving any analgesics among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings suggest that escalating the opioid dosage to treat high levels of pain in patients with a comorbid psychiatric diagnosis should be reconsidered, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities, including active cancer; however, we did not identify a relationship between any of these and opioid responsiveness. Although SUDs were highly prevalent in our sample, there was no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in genes encoding opioid receptors and hepatic enzymes involved in drug metabolism, as well as an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, even though each opioid dose might be perceived as beneficial.15 Psychosocial distress is increasingly recognized as an important factor in opioid response. Adverse selection is the process culminating in those with psychosocial distress and/or SUDs being prescribed more opioids for longer durations.16 Our data suggest that this process could play a role in PAC settings. Exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, is also possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors likely is predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers with a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could consider revisiting pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we only collected data regarding response to opioids in this study, any pain medication administered as needed (eg, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, an approach recommended by the World Health Organization stepladder for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoiding increases to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
Limitations
This study has several limitations, the most significant of which is its small sample size, a consequence of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who received opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits power to detect small differences. The data collected should be used to inform formal power calculations to select adequate sample sizes for subsequent larger studies. Validation studies that reproduce these findings, including in samples drawn from the same population on different dates, are an important next step. Moreover, we only had data on a single dimension of pain (intensity/severity), as measured by the pain scale that nursing staff used to make a real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider using pain measures that provide multidimensional assessment (eg, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids. It is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain the robust correlations with psychiatric comorbidities. Also, simple tolerance would be expected to be overcome with higher opioid dosages, whereas our study demonstrated less responsiveness at higher dosages. These data suggest that some individuals’ pain might be poorly opioid responsive, and psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method using larger sample sizes and several clinical sites. Finally, our regression models controlled for average pre-opioid pain rating scores, which is only 1 of the covariates important for examining effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data but requires further validation before considering scaling for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
Pollution levels linked to physical and mental health problems
Previous analyses have found that environmental air pollution from sources such as car exhaust and factory output can trigger an inflammatory response in the body. What’s new about a study published in RMD Open is that it explored an association between long-term exposure to pollution and the risk of autoimmune diseases, wrote Giovanni Adami, MD, of the University of Verona (Italy), and colleagues.
“Environmental air pollution, according to the World Health Organization, is a major risk to health and 99% of the population worldwide is living in places where recommendations for air quality are not met,” said Dr. Adami in an interview. The limited data on the precise role of air pollution on rheumatic diseases in particular prompted the study, he said.
To explore the potential link between air pollution exposure and autoimmune disease, the researchers reviewed medical information from 81,363 adults via a national medical database in Italy; the data were submitted between June 2016 and November 2020.
The average age of the study population was 65 years, and 92% were women; 22% had at least one coexisting health condition. Each study participant was linked to local environmental monitoring via their residential postcode.
The researchers obtained details about environmental particulate matter concentrations from the Italian Institute of Environmental Protection, which operates 617 monitoring stations in 110 Italian provinces. They focused on particulate matter with diameters of 10 mcm or less (PM10) and 2.5 mcm or less (PM2.5).
Exposure thresholds of 30 mcg/m3 for PM10 and 20 mcg/m3 for PM2.5 are generally considered harmful to health, they noted. On average, the long-term exposure was 16 mcg/m3 for PM2.5 and 25 mcg/m3 for PM10 between 2013 and 2019.
Overall, 9,723 individuals (12%) were diagnosed with an autoimmune disease between 2016 and 2020.
Exposure to PM10 was associated with a 7% higher risk of diagnosis with any autoimmune disease for every 10 mcg/m3 increase in concentration, but no association appeared between PM2.5 exposure and increased risk of autoimmune diseases.
However, in an adjusted model, chronic exposure to PM10 above 30 mcg/m3 and to PM2.5 above 20 mcg/m3 were associated with a 12% and 13% higher risk, respectively, of any autoimmune disease.
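To illustrate how a per-increment risk estimate like the 7% figure above scales to other exposure differences, the sketch below applies a log-linear dose-response assumption, a convention in environmental epidemiology. This assumption and the helper function are ours for illustration; the study itself does not publish this formula.

```python
import math

def scaled_relative_risk(rr_per_increment: float, increment: float, delta: float) -> float:
    """Scale a relative risk reported per fixed exposure increment to an
    arbitrary exposure difference, assuming a log-linear dose-response
    (i.e., risk multiplies by the same factor for each equal increment)."""
    return math.exp(math.log(rr_per_increment) * delta / increment)

# The study reports a 7% higher risk (RR 1.07) per 10 mcg/m3 increase in PM10.
# Under the log-linear assumption, a 20 mcg/m3 difference would correspond to
# roughly 1.07 squared, about a 14.5% higher risk.
rr_20 = scaled_relative_risk(1.07, 10.0, 20.0)
```

Note that such extrapolation is only reasonable within the exposure range the study actually observed; extending it far beyond that range would outrun the data.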
Chronic exposure to high levels of PM10 was specifically associated with a higher risk of rheumatoid arthritis, but no other autoimmune diseases. Chronic exposure to high levels of PM2.5 was associated with a higher risk of rheumatoid arthritis, connective tissue diseases, and inflammatory bowel diseases.
In their discussion, the researchers noted that PM2.5 particles, because of their smaller diameter, fluctuate less in response to rain and other weather than PM10 particles do, which might make PM2.5 a more accurate predictor of chronic air pollution exposure.
The study findings were limited by several factors including the observational design, which prohibits the establishment of cause, and a lack of data on the start of symptoms and dates of diagnoses for autoimmune diseases, the researchers noted. Other limitations include the high percentage of older women in the study, which may limit generalizability, and the inability to account for additional personal exposure to pollutants outside of the environmental exposure, they said.
However, the results were strengthened by the large sample size and wide geographic distribution with variable pollution exposure, they said.
“Unfortunately, we were not surprised at all,” by the findings, Dr. Adami said in an interview.
“The biological rationale underpinning our findings is strong. Nevertheless, the magnitude of the effect was overwhelming. In addition, we saw an effect even at threshold of exposure that is widely considered as safe,” Dr. Adami noted.
Clinicians have been taught to consider cigarette smoking or other lifestyle behaviors as major risk factors for the development of several autoimmune diseases, said Dr. Adami. “In the future, we probably should include air pollution exposure as a risk factor as well. Interestingly, there is also accumulating evidence linking acute exposure to environmental air pollution with flares of chronic arthritis,” he said.
“Our study could have direct societal and political consequences,” and might help direct policy makers’ decisions on addressing strategies aimed to reduce fossil emissions, he said. As for additional research, “we certainly need multination studies to confirm our results on a larger scale,” Dr. Adami emphasized. “In addition, it is time to take action and start designing interventions aimed to reduce acute and chronic exposure to air pollution in patients suffering from RMDs.”
Consider the big picture of air quality
The Italian study is especially timely “given our evolving and emerging understanding of environmental risk factors for acute and chronic diseases, which we must first understand before we can address,” said Eileen Barrett, MD, of the University of New Mexico, Albuquerque, in an interview.
“I am largely surprised about the findings, as most physicians aren’t studying ambient air quality and risk for autoimmune disease,” said Dr. Barrett. “More often we think of air quality when we think of risk for respiratory diseases than autoimmune diseases, per se,” she said.
“There are several take-home messages from this study,” said Dr. Barrett. “The first is that we need more research to understand the consequences of air pollutants on health. Second, this study reminds us to think broadly about how air quality and our environment can affect health. And third, all clinicians should be committed to promoting science that can improve public health and reduce death and disability,” she emphasized.
The findings do not specifically reflect associations between pollution and other conditions such as chronic obstructive pulmonary disease and asthma although previous studies have shown an association between asthma and COPD exacerbations and air pollution, Dr. Barrett said.
“Further research will be needed to confirm the associations reported in this study,” Dr. Barrett said.
More research in other countries, including research related to other autoimmune diseases, and with other datasets on population and community level risks from poor air quality, would be helpful, and that information could be used to advise smart public policy, Dr. Barrett added.
Air pollution’s mental health impact
Air pollution’s effects extend beyond the physical to the psychological, according to a new study of depression in teenagers published in Developmental Psychology.
Previous research on the environmental factors associated with depressive symptoms in teens has focused mainly on individual and family level contributors; the impact of the physical environment has not been well studied, the investigators, Erika M. Manczak, PhD, of the University of Denver and colleagues, wrote.
In their paper, the authors found a significant impact of neighborhood ozone exposure on the trajectory of depressive symptoms in teens over a 4-year period.
“Given that inhaling pollution activates biological pathways implicated in the development of depression, including immune, cardiovascular, and neurodevelopmental processes, exposure to ambient air pollution may influence the development and/or trajectory of depressive symptoms in youth,” they said.
The researchers recruited 213 adolescents in the San Francisco Bay area through local advertisements. The participants were aged 9-13 years at baseline, with an average age of 11 years. A total of 121 were female, 47% were white, 8.5% were African American, 12.3% were Asian, 10.4% were nonwhite Latin, and 21.7% were biracial or another ethnicity. The participants self-reported depressive symptoms and other psychopathology symptoms up to three times during the study period. Ozone exposure was calculated based on home addresses.
After controlling for other personal, family, and neighborhood variables, the researchers found that higher levels of ozone exposure were significantly associated with increased depressive symptoms over time, and the slope of trajectory of depressive symptoms became steeper as the ozone levels increased (P less than .001). Ozone did not significantly predict the trajectory of any other psychopathology symptoms.
“The results of this study provide preliminary support for the possibility that ozone is an overlooked contributor to the development or course of youth depressive symptoms,” the researchers wrote in their discussion.
“Interestingly, the association between ozone and symptom trajectories as measured by Anxious/Depressed subscale of the [Youth Self-Report] was not as strong as it was for the [Children’s Depression Inventory-Short Version] or Withdrawn/Depressed scales, suggesting that associations are more robust for behavioral withdrawal symptoms of depression than for other types of symptoms,” they noted.
The study findings were limited by the use of self-reports and by the inability of the study design to show causality, the researchers said. Other limitations include the use of average assessments of ozone that are less precise, lack of assessment of biological pathways for risk, lack of formal psychiatric diagnoses, and the small geographic region included in the study, they said.
However, the results provide preliminary evidence that ozone exposure is a potential contributing factor to depressive symptoms in youth, and serve as a jumping-off point for future research, they noted. Future studies should address changes in systemic inflammation, neurodevelopment, or stress reactivity, as well as concurrent psychosocial or biological factors, and temporal associations between air pollution and mental health symptoms, they concluded.
Environmental factors drive inflammatory responses
Peter L. Loper Jr., MD, considers the findings of the Developmental Psychology study to be unsurprising but important – because air pollution is simply getting worse.
“As the study authors cite, there is sufficient data correlating ozone to negative physical health outcomes in youth, but a paucity of data exploring the impact of poor air quality on mental health outcomes in this demographic,” noted Dr. Loper, of the University of South Carolina, Columbia, in an interview.
“As discussed by the study researchers, any environmental exposure that increases immune-mediated inflammation can result in negative health outcomes. In fact, there is already data to suggest that similar cytokines, or immune cell signalers, that get released by our immune system due to environmental exposures and that contribute to asthma, may also be implicated in depression and other mental health problems,” he noted.
“Just like downstream symptom indicators of physical illnesses such as asthma are secondary to immune-mediated pulmonary inflammation, downstream symptom indicators of mental illness, such as depression, are secondary to immune-mediated neuroinflammation,” Dr. Loper emphasized. “The most well-characterized upstream phenomenon perpetuating the downstream symptom indicators of depression involve neuroinflammatory states due to psychosocial and relational factors such as chronic stress, poor relationships, or substance use. However, any environmental factor that triggers an immune response and inflammation can promote neuroinflammation that manifests as symptoms of mental illness.”
The message for teens with depression and their families is that “we are a product of our environment,” Dr. Loper said. “When our environments are proinflammatory, or cause our immune system to become overactive, then we will develop illness; however, the most potent mediator of inflammation in the brain, and the downstream symptoms of depression, is our relationships with those we love most,” he said.
Dr. Loper suggested that research aimed at identifying other sources of immune-mediated inflammation in physical environments, and at better understanding how environmental phenomena like ozone may compound previously established risk factors for mental illness, could be useful.
The RMD Open study received no outside funding, and its authors had no financial conflicts.
The Developmental Psychology study was supported by the National Institute of Mental Health and the Stanford University Precision Health and Integrated Diagnostics Center. The researchers for that report, and Dr. Loper and Dr. Barrett had no conflicts to disclose.
Analyses of environmental data have found that air pollution from sources such as car exhaust and factory output can trigger an inflammatory response in the body. What’s new about a study published in RMD Open is that it explored an association between long-term exposure to pollution and risk of autoimmune diseases, wrote Giovanni Adami, MD, of the University of Verona (Italy), and colleagues.
“Environmental air pollution, according to the World Health Organization, is a major risk to health and 99% of the population worldwide is living in places where recommendations for air quality are not met,” said Dr. Adami in an interview. The limited data on the precise role of air pollution on rheumatic diseases in particular prompted the study, he said.
To explore the potential link between air pollution exposure and autoimmune disease, the researchers reviewed medical information from 81,363 adults via a national medical database in Italy; the data were submitted between June 2016 and November 2020.
The average age of the study population was 65 years, and 92% were women; 22% had at least one coexisting health condition. Each study participant was linked to local environmental monitoring via their residential postcode.
The researchers obtained details about concentrations of particulate matter in the environment from the Italian Institute of Environmental Protection, drawing on 617 monitoring stations in 110 Italian provinces. They focused on particles with diameters of 10 mcm or less (PM10) and 2.5 mcm or less (PM2.5).
Exposure thresholds of 30 mcg/m3 for PM10 and 20 mcg/m3 for PM2.5 are generally considered harmful to health, they noted. On average, the long-term exposure was 16 mcg/m3 for PM2.5 and 25 mcg/m3 for PM10 between 2013 and 2019.
Overall, 9,723 individuals (12%) were diagnosed with an autoimmune disease between 2016 and 2020.
Exposure to PM10 was associated with a 7% higher risk of diagnosis with any autoimmune disease for every 10 mcg/m3 increase in concentration, but no association appeared between PM2.5 exposure and increased risk of autoimmune diseases.
However, in an adjusted model, chronic exposure to PM10 above 30 mcg/m3 and to PM2.5 above 20 mcg/m3 were associated with a 12% and 13% higher risk, respectively, of any autoimmune disease.
Chronic exposure to high levels of PM10 was specifically associated with a higher risk of rheumatoid arthritis, but no other autoimmune diseases. Chronic exposure to high levels of PM2.5 was associated with a higher risk of rheumatoid arthritis, connective tissue diseases, and inflammatory bowel diseases.
In their discussion, the researchers noted that PM2.5 particles, because of their smaller diameter, fluctuate less in response to rain and other weather than PM10 particles do, which might make PM2.5 a more accurate marker of chronic air pollution exposure.
The study findings were limited by several factors including the observational design, which prohibits the establishment of cause, and a lack of data on the start of symptoms and dates of diagnoses for autoimmune diseases, the researchers noted. Other limitations include the high percentage of older women in the study, which may limit generalizability, and the inability to account for additional personal exposure to pollutants outside of the environmental exposure, they said.
However, the results were strengthened by the large sample size and wide geographic distribution with variable pollution exposure, they said.
“Unfortunately, we were not surprised at all,” by the findings, Dr. Adami said in an interview.
“The biological rationale underpinning our findings is strong. Nevertheless, the magnitude of the effect was overwhelming. In addition, we saw an effect even at thresholds of exposure that are widely considered safe,” Dr. Adami noted.
Clinicians have been taught to consider cigarette smoking or other lifestyle behaviors as major risk factors for the development of several autoimmune diseases, said Dr. Adami. “In the future, we probably should include air pollution exposure as a risk factor as well. Interestingly, there is also accumulating evidence linking acute exposure to environmental air pollution with flares of chronic arthritis,” he said.
“Our study could have direct societal and political consequences,” and might help direct policy makers’ decisions on addressing strategies aimed to reduce fossil emissions, he said. As for additional research, “we certainly need multination studies to confirm our results on a larger scale,” Dr. Adami emphasized. “In addition, it is time to take action and start designing interventions aimed to reduce acute and chronic exposure to air pollution in patients suffering from RMDs.”
Consider the big picture of air quality
The Italian study is especially timely “given our evolving and emerging understanding of environmental risk factors for acute and chronic diseases, which we must first understand before we can address,” said Eileen Barrett, MD, of the University of New Mexico, Albuquerque, in an interview.
“I am largely surprised about the findings, as most physicians aren’t studying ambient air quality and risk for autoimmune disease,” said Dr. Barrett. “More often we think of air quality when we think of risk for respiratory diseases than autoimmune diseases, per se,” she said.
“There are several take-home messages from this study,” said Dr. Barrett. “The first is that we need more research to understand the consequences of air pollutants on health. Second, this study reminds us to think broadly about how air quality and our environment can affect health. And third, all clinicians should be committed to promoting science that can improve public health and reduce death and disability,” she emphasized.
The findings do not specifically reflect associations between pollution and other conditions such as chronic obstructive pulmonary disease and asthma, although previous studies have shown an association between asthma and COPD exacerbations and air pollution, Dr. Barrett said.
“Further research will be needed to confirm the associations reported in this study,” Dr. Barrett said.
More research in other countries, including research on other autoimmune diseases and with other datasets on population- and community-level risks from poor air quality, would be helpful, and that information could be used to inform smart public policy, Dr. Barrett added.
Air pollution’s mental health impact
Air pollution’s effects extend beyond the physical to the psychological, according to a new study of depression in teenagers published in Developmental Psychology.
Previous research on the environmental factors associated with depressive symptoms in teens has focused mainly on individual- and family-level contributors; the impact of the physical environment has not been well studied, wrote the investigators, Erika M. Manczak, PhD, of the University of Denver, and colleagues.
In their paper, the authors found a significant impact of neighborhood ozone exposure on the trajectory of depressive symptoms in teens over a 4-year period.
“Given that inhaling pollution activates biological pathways implicated in the development of depression, including immune, cardiovascular, and neurodevelopmental processes, exposure to ambient air pollution may influence the development and/or trajectory of depressive symptoms in youth,” they said.
The researchers recruited 213 adolescents in the San Francisco Bay Area through local advertisements. The participants were aged 9-13 years at baseline, with an average age of 11 years. A total of 121 were female; 47% were white, 8.5% were African American, 12.3% were Asian, 10.4% were nonwhite Latinx, and 21.7% were biracial or another ethnicity. The participants self-reported depressive symptoms and other psychopathology symptoms up to three times during the study period. Ozone exposure was calculated based on home addresses.
After controlling for other personal, family, and neighborhood variables, the researchers found that higher levels of ozone exposure were significantly associated with increased depressive symptoms over time, and the slope of trajectory of depressive symptoms became steeper as the ozone levels increased (P less than .001). Ozone did not significantly predict the trajectory of any other psychopathology symptoms.
“The results of this study provide preliminary support for the possibility that ozone is an overlooked contributor to the development or course of youth depressive symptoms,” the researchers wrote in their discussion.
“Interestingly, the association between ozone and symptom trajectories as measured by Anxious/Depressed subscale of the [Youth Self-Report] was not as strong as it was for the [Children’s Depression Inventory-Short Version] or Withdrawn/Depressed scales, suggesting that associations are more robust for behavioral withdrawal symptoms of depression than for other types of symptoms,” they noted.
The study findings were limited by the use of self-reports and by the inability of the study design to show causality, the researchers said. Other limitations include the use of average assessments of ozone that are less precise, lack of assessment of biological pathways for risk, lack of formal psychiatric diagnoses, and the small geographic region included in the study, they said.
However, the results provide preliminary evidence that ozone exposure is a potential contributing factor to depressive symptoms in youth, and serve as a jumping-off point for future research, they noted. Future studies should address changes in systemic inflammation, neurodevelopment, or stress reactivity, as well as concurrent psychosocial or biological factors, and temporal associations between air pollution and mental health symptoms, they concluded.
Environmental factors drive inflammatory responses
Peter L. Loper Jr., MD, considers the findings of the Developmental Psychology study to be unsurprising but important – because air pollution is simply getting worse.
“As the study authors cite, there is sufficient data correlating ozone to negative physical health outcomes in youth, but a paucity of data exploring the impact of poor air quality on mental health outcomes in this demographic,” noted Dr. Loper, of the University of South Carolina, Columbia, in an interview.
“As discussed by the study researchers, any environmental exposure that increases immune-mediated inflammation can result in negative health outcomes. In fact, there is already data to suggest that similar cytokines, or immune cell signalers, that get released by our immune system due to environmental exposures and that contribute to asthma, may also be implicated in depression and other mental health problems,” he noted.
“Just like downstream symptom indicators of physical illnesses such as asthma are secondary to immune-mediated pulmonary inflammation, downstream symptom indicators of mental illness, such as depression, are secondary to immune-mediated neuroinflammation,” Dr. Loper emphasized. “The most well-characterized upstream phenomena perpetuating the downstream symptom indicators of depression involve neuroinflammatory states due to psychosocial and relational factors such as chronic stress, poor relationships, or substance use. However, any environmental factor that triggers an immune response and inflammation can promote neuroinflammation that manifests as symptoms of mental illness.”
The message for teens with depression and their families is that “we are a product of our environment,” Dr. Loper said. “When our environments are proinflammatory, or cause our immune system to become overactive, then we will develop illness; however, the most potent mediator of inflammation in the brain, and the downstream symptoms of depression, is our relationships with those we love most,” he said.
Dr. Loper suggested that research aimed at identifying other sources of immune-mediated inflammation in physical environments, and at better understanding how environmental phenomena like ozone may compound previously established risk factors for mental illness, could be useful.
The RMD Open study received no outside funding, and its authors had no financial conflicts.
The Developmental Psychology study was supported by the National Institute of Mental Health and the Stanford University Precision Health and Integrated Diagnostics Center. The researchers for that report, as well as Dr. Loper and Dr. Barrett, had no conflicts to disclose.
FROM RMD OPEN