French fries vs. almonds every day for a month: What changes?
Eat french fries every day for a month? Sure, as long as it’s for science.
That’s exactly what 107 people did in a scientific study, while 58 others ate a daily serving of almonds with the same number of calories.
At the end of the study, the researchers found no significant differences between the groups in people’s total amount of fat or their fasting glucose measures, according to the study, published Feb. 18 in the American Journal of Clinical Nutrition.
The french fry eaters gained a little more weight, but it was not statistically significant. The people who ate french fries gained 0.49 kilograms (just over a pound), vs. about a tenth of a kilogram (about one-fifth of a pound) in the group of people who ate almonds.
“The take-home is if you like almonds, eat some almonds. If you like potatoes, eat some potatoes, but don’t overeat either,” said study leader David B. Allison, PhD, a professor at Indiana University’s School of Public Health in Bloomington. “It’s probably good to have a little bit of each – each has some unique advantages in terms of nutrition.”
“This study confirms what registered dietitian nutritionists already know – all foods can fit. We can eat almonds, french fries, kale, and cookies,” said Melissa Majumdar, a registered dietitian and certified specialist in obesity and weight management at Emory University Hospital Midtown in Atlanta. “The consumption of one food or the avoidance of another does not make a healthy diet.”
At the same time, people should not interpret the results to mean it’s OK to eat french fries all day, every day. “We know that while potatoes are nutrient dense, the frying process reduces the nutritional value,” Ms. Majumdar said.
“Because french fries are often consumed alongside other nutrient-poor or high-fat foods, they should not be consumed daily but can fit into an overall balanced diet,” she added.
Would you like fries with that?
The researchers compared french fries with almonds because almonds are known for their positive effects on energy balance and body composition and for their low glycemic index. The research was partly funded by the Alliance for Potato Research and Education.
French fries are an incredibly popular food in the United States. According to an August 2021 post on the food website Mashed, Americans eat an average of 30 pounds of french fries each year.
Although consumption of almonds is increasing, Americans eat far less of them by weight each year than they do fries – an estimated 2.4 pounds of almonds per person, according to August 2021 figures from the Almond Board of California.
Dr. Allison and colleagues recruited 180 healthy adults for the study. Their average age was 30, and about two-thirds were women.
They randomly assigned 60 people to add about a medium serving of plain french fries (Tater Pals Ovenable Crinkle Cut Fries, Simplot Foods) to their diet. Another 60 people were assigned to the same amount of Tater Pals fries with herbs (oregano, basil, garlic, onion, and rosemary), and another 60 people ate Wonderful brand roasted and salted almonds.
Investigators told people to add either the potatoes or nuts to their diet every day for a month and gave no further instructions.
After some people dropped out of the study, results were based on 55 who ate regular french fries, 52 who ate french fries with herbs and spices, and 58 who ate the nuts.
The researchers scanned people to detect any changes in fat mass. They also measured changes in body weight, carbohydrate metabolism, and fasting blood glucose and insulin.
Key findings
Changes in total body fat mass were not significantly different between the french fry groups and the almond group.
In terms of glycemic control, eating french fries for a month “is no better or worse than consuming a caloric equivalent of nuts,” the researchers noted.
Similarly, the change in total fat mass did not differ significantly among the three treatment groups.
Adding the herb and spice mix to the french fries did not make a significant difference in glycemic control, contrary to what the researchers thought might happen.
And fasting glucose, insulin, and HbA1c levels did not differ significantly between the combined french fry and almond groups. When comparisons were made among the three groups, the almond group had a lower insulin response, compared to the plain french fry group.
Many different things could be explored in future research, said study coauthor Rebecca Hanson, a registered dietitian nutritionist and research study coordinator at the University of Alabama at Birmingham. “People were not told to change their exercise or diet, so there are so many different variables,” she said. Repeating the research in people with diabetes is another possibility going forward.
The researchers acknowledged that 30 days may not have been long enough to show a significant difference. But they also noted that many previous studies were observational, whereas theirs was a randomized controlled trial, considered a more robust study design.
Dr. Allison, the senior author, emphasized that this is just one study. “No one study has all the answers.
“I don’t want to tell you our results are the be all and end all or that we’ve now learned everything there is to learn about potatoes and almonds,” he said.
“Our study shows for the variables we looked at ... we did not see important, discernible differences,” he said. “That doesn’t mean if you ate 500 potatoes a day or 500 kilograms of almonds it would be the same. But at these modest levels, it doesn’t seem to make much difference.”
The study was funded by grants from the National Institutes of Health and from the Alliance for Potato Research and Education.
Asked if the industry support should be a concern, Ms. Majumdar said, “Funding from a specific food board does not necessarily dilute the results of a well-designed study. It’s not uncommon for a funding source to come from a food board that may benefit from the findings. Research money has to come from somewhere.
“This study has reputable researchers, some of the best in the field,” she said.
The U.S. produces the most almonds in the world, and California is the only state where almonds are grown commercially. Asked for the almond industry’s take on the findings, “We don’t have a comment,” said Rick Kushman, a spokesman for the Almond Board of California.
A version of this article first appeared on WebMD.com.
FROM AMERICAN JOURNAL OF CLINICAL NUTRITION
Physician loses right leg, sues podiatrist; more
In December 2020, Mario Adajar, MD, 59, an internist in Wyoming, Penn., sought treatment for his foot calluses and the chronic ulceration of his right foot, as a story in the Pennsylvania Record, among other news sites, reports.
Dr. Adajar consulted a podiatrist, who has surgical privileges at Wilkes-Barre Commonwealth Hospital. According to his complaint, Dr. Adajar made the podiatrist aware that he had type 2 diabetes and had recently undergone a kidney transplant.
Over the next several months, Dr. Adajar continued to be treated by the podiatrist, who, among other things, debrided and cleaned his patient’s ulcerated right foot on multiple occasions. In June 2021, working out of the hospital’s Wound Healing Center, the podiatrist placed Dr. Adajar’s right leg in a total contact cast.
By the following day, the patient experienced what he later described as “excruciating” pain around the cast. He was also running a fever of 102.3° F. Taken to a local emergency department, Dr. Adajar soon went into septic shock, accompanied by both atrial fibrillation and acute hypoxic respiratory failure.
Doctors soon had a diagnosis: an infection with gram-negative bacilli. Meanwhile, his right leg had developed severe gas gangrene. Nevertheless, after treatment, Dr. Adajar was discharged on June 15, 2021, and advised to continue with his follow-up, which included a referral to physical therapy. However, on July 27, 2021, doctors at Wilkes-Barre Commonwealth were forced to amputate Dr. Adajar’s right leg through the fibula and tibia.
In his suit, Dr. Adajar claims that the decision by the podiatrist and his associates to place him in a total contact cast was the direct and immediate cause of his injuries, most catastrophically the amputation of his right leg. He and his legal team are seeking damages “in excess of $50,000,” the standard language in Pennsylvania for cases likely to involve much larger awards.
Dr. Adajar, despite the loss of his right leg, continues to practice internal medicine.
Doctor wins forceps-delivery suit
Last month, a Virginia jury decided in favor of a physician accused of damaging a baby’s eye during delivery, a story in The Winchester Star reports.
In December 2015, Melissa Clements went to Winchester Medical Center, part of Valley Health, to have her baby delivered. Her doctor was ob.gyn. George F. Craft II, at the time a member of Winchester Women’s Specialists. At one point during the roughly 30-minute delivery, Dr. Craft used forceps to remove Ms. Clements’s baby, who in the process sustained facial fractures and left-eye damage.
At trial, Dr. Craft argued that a forceps delivery was justified because the baby was stuck and his patient had refused a C-section.
The attorney for the plaintiffs — which included Ms. Clements’s husband — claimed that the use of forceps was premature, as professional guidelines require that a woman in labor be allowed at least 3 hours to push on her own before forceps are employed. (The suit, initially filed in 2019, also accused Dr. Craft of failing to properly inform his patient about the risks of, and alternatives to, this form of delivery. That part of the complaint was dropped, however, prior to the recent trial.)
The jury deliberated just 50 minutes before deciding Dr. Craft wasn’t medically negligent in the birth of William, Ms. Clements’s now 6-year-old son, who will be forced to wear contact lenses or glasses for life, or undergo corrective surgery.
As Dr. Craft’s attorney explained at trial: “He [Dr. Craft] hoped to give her [Ms. Clements] what she wanted: a vaginal delivery. But forceps techniques can and will cause injuries, even when properly placed.”
Unsupervised PAs subject to med-mal cap, state says
The California Supreme Court ruled late last month that even unsupervised physician assistants (PAs) are protected under the state’s $250,000 cap on noneconomic damages, according to a posting on the website of the Claims Journal, among other news sites.
The ruling stems from a 2013 suit filed by Marisol Lopez, who claimed that a dermatologist, a plastic surgeon, and two PAs had misdiagnosed her child’s skin cancer. Ms. Lopez’s child, Olivia Sarinana, died in February 2014, causing her mother to amend her original claim to a wrongful-death suit.
A trial court found both the doctors and the PAs liable for negligence, awarding the plaintiff $11,200 in economic damages and $4.25 million in noneconomic damages. The court subsequently reduced that amount, however, referencing the state’s $250,000 limit on noneconomic damages, which is part of the Medical Injury Compensation Reform Act of 1975, known as MICRA.
Ms. Lopez appealed the decision, arguing that the cap shouldn’t apply to the two PAs, because neither was under a physician’s direct supervision and therefore was not acting within the proper scope of practice, as defined by state law. Despite agreeing with the factual basis of Ms. Lopez’s claim — that neither PA was being supervised during the period in question — the trial court refused to waive the state cap. Ms. Lopez again appealed, and, in a split decision, the Second District Court of Appeal upheld the trial court’s decision.
At this point, attorneys for Ms. Lopez applied for, and obtained, a review before the state’s highest court. Last month, the justices weighed in, ruling that the PAs were still entitled to protection under MICRA because they “had valid delegation-of-service agreements in place.” In other words, while the two PAs had not been directly supervised by a physician, their services had been properly delegated by one.
Said Associate Justice Goodwin Liu, who wrote the opinion: “To be sure, there are reasonable policy arguments for excluding physician assistants who perform medical services without actual supervision from a cap on non-economic damages, and the Legislature is well equipped to weigh and reweigh the competing policy considerations. But our role is confined to interpreting the statute before us in the manner that comports most closely with the Legislature’s purpose in enacting MICRA.”
Despite the high-court ruling, voters may soon get a chance to amend the nearly 5-decade-old MICRA legislation. A November ballot initiative would not only adjust the cap for inflation, raising it to more than $1.2 million, but would also permit “judges and juries to waive the cap entirely for cases involving death and permanent disability.”
Medical groups have said that if either or both of these changes happen, the cost of healthcare in the Golden State will surely go up.
The content contained in this article is for informational purposes only and does not constitute legal advice. Reliance on any information provided in this article is solely at your own risk.
A version of this article first appeared on Medscape.com.
Standard of care in suicide prevention in pediatrics: A review of the Blueprint for Youth Suicide Prevention
In March, an unprecedented collaboration between the American Academy of Pediatrics (AAP), American Foundation for Suicide Prevention (AFSP), and National Institute of Mental Health (NIMH) resulted in the development of the Blueprint for Youth Suicide Prevention. The blueprint comprises a consensus summary of expert recommendations, educational resources, and specific and practical strategies for pediatricians and other health care providers to support youth at risk for suicide in pediatric primary care settings. It is ambitious and far-reaching in scope and speaks to the growing understanding that suicide care pathways offer a clear ray of hope toward a shared “zero suicide” goal.
Following the declaration of a national emergency in child and adolescent mental health, the blueprint represents a resource to help us move forward. It offers practical suggestions at the clinic and individual levels, in addition to the community and school levels, to tackle the alarming 30% rise in emergency department visits for youth suicide attempts over the last 2 pandemic years. A reflexive trip to an emergency department for an emergency mental health evaluation after a disclosure of suicidal ideation isn’t always the best next step in a pathway to care, nor is it a sustainable community solution, given the dearth of mental health and crisis resources nationally.
With this new tool in hand, let’s walk through a case showing how one might approach a patient in the office setting after a concerning disclosure.
Case
Emily is a 12-year-old girl who presents for a routine well-check in your practice. Her mother shared with you before your examination that she has wondered whether Emily may need more support. Since the pandemic, Emily has increasingly spent time using social media and watching television. When you meet with Emily on her own, she says, “I know that life is getting back to normal, and I am supposed to be excited for that, but now I have some anxiety about doing what I used to do. I’ve had some thoughts that it would be better to sleep forever and not wake up ...”
Case discussion
The blueprint recommends universal screening for suicide in all youths aged 12 and over. Not all children will be as open as Emily about their inner thoughts. The blueprint provides a link to the ASQ (Ask Suicide-Screening Questions), a brief set of questions to ascertain suicide risk that takes 20 seconds to complete with a patient. It is recommended as a first-line screening tool by the NIMH (Suicide Risk Screening Tool). This tool can guide one’s clinical thinking beyond the question of whether or not a child feels “suicidal” after a disclosure such as Emily’s. The blueprint also provides a tip sheet on how to frame these screenings to ensure their thoroughness and interpersonal effectiveness.
Case continued
You go through the ASQ with Emily, and she reveals that she has had thoughts about suicide, though not currently and without any plan. According to the ASQ, this result falls into the category of a “non-acute positive screen (potential risk identified),” and the patient now requires a brief suicide safety assessment to determine whether an emergency mental health evaluation is needed.
Case discussion
An initial screen (ASQ) should be followed by a Brief Suicide Safety Assessment (BSSA). Two common ones are the ASQ-BSSA (created by the same group that created the ASQ) and the C-SSRS (Columbia Suicide Severity Rating Scale).
The blueprint suggests adding this level of depth to the office investigation of a disclosed concern about suicidal ideation, and following the algorithm to ensure safety.
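For clinicians who like to see a decision flow spelled out, below is a minimal sketch, in Python, of the ASQ triage logic described above. The item wording and decision rules are paraphrased from the publicly available NIMH ASQ toolkit; the function, its category labels, and its messages are illustrative only – a teaching aid, not a clinical instrument.

```python
# Illustrative sketch of ASQ triage logic (paraphrased from the NIMH
# ASQ toolkit). For teaching purposes only; not a clinical instrument.

ASQ_ITEMS = [
    "In the past few weeks, have you wished you were dead?",
    "In the past few weeks, have you felt that you or your family "
    "would be better off if you were dead?",
    "In the past week, have you been having thoughts about killing yourself?",
    "Have you ever tried to kill yourself?",
]
ACUITY_ITEM = "Are you having thoughts of killing yourself right now?"


def classify_asq(item_answers, thoughts_now=None):
    """Classify a completed ASQ screen.

    item_answers: list of four booleans, one per screening item.
    thoughts_now: answer to the acuity question, asked only when at
    least one screening item is positive.
    """
    if not any(item_answers):
        return "negative screen: no further action required"
    if thoughts_now:
        return ("acute positive screen: immediate safety precautions "
                "and a STAT full mental health evaluation")
    return ("non-acute positive screen (potential risk identified): "
            "conduct a brief suicide safety assessment (BSSA) to decide "
            "whether an emergency evaluation is needed")


# Emily: past suicidal thoughts (item 3), none currently, no plan.
print(classify_asq([False, False, True, False], thoughts_now=False))
```

Running the example with Emily’s answers returns the non-acute positive category – exactly the branch that triggers the BSSA described above.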
The complete screening process is also described, in detail, in this instructional video: Suicide Risk Screening Training: How to Manage Patients at Risk for Suicide.
Case continued
Following the ASQ-BSSA, you determine that a referral to more immediate mental health resources would be most helpful and discuss your concerns with Emily and her family. You connect her via a “warm handoff” to a therapist in the office available from the newly adopted primary care mental health integration model. Emily completes further screening for anxiety and depressive disorders and begins a course of cognitive-behavioral therapy. You feel reassured that the therapist can connect with the consulting psychiatrist in the model who can offer a comprehensive psychiatric evaluation if needed. A referral to the emergency department to complete this screening has been avoided. You also plan for a “caring contact” from the office in a day to check in on Emily and her family and, before they go, provide them with crisis services and resources.
The blueprint represents a thoughtful means of knowing when emergency department visits are necessary and when other forms of support – such as robust safety planning, a connection to other nonemergency services, and “caring contacts” from the office within 24-48 hours – are actually of more benefit. “Caring contacts,” in particular, have been lauded as having a significant impact in modifying the course of a patient with suicidal ideation. Data show that even simple measures, such as follow-up phone calls by any staff member or postcards from the clinic over 6-12 months, can affect suicide risk.
Beyond outlining suicide care pathways, the blueprint also shares clinical algorithms from the National Network of Child Psychiatry Access Programs (NNCPAP). These algorithms help clinicians assess common issues in pediatrics and reserve referrals to psychiatry and escalations of care to the emergency department for certain high-risk circumstances.
The blueprint seeks to provide a “one-stop-shop” for accessible and usable resources in the clinic workflow for suicide prevention. It is inspiring to see our professional organizations pursuing practical and practice-based solutions to our children’s mental health crisis in unison.
Dr. Pawlowski is a child and adolescent consulting psychiatrist. She is a division chief at the University of Vermont Medical Center where she focuses on primary care mental health integration within primary care pediatrics, internal medicine, and family medicine. Email her at [email protected].
In March, an unprecedented collaboration between the American Academy of Pediatrics (AAP), American Foundation for Suicide Prevention (AFSP), and National Institute of Mental Health (NIMH) resulted in the development of the Blueprint for Youth Suicide Prevention. The blueprint comprises a consensus summary of expert recommendations, educational resources, and specific and practical strategies for pediatricians and other health care providers to support youth at risk for suicide in pediatric primary care settings. It is ambitious and far-reaching in scope and speaks to the growing understanding that suicide care pathways offer a clear ray of hope toward a shared “zero suicide” goal.
Following the declaration of a national emergency for child and adolescent mental health, the blueprint represents a resource to help us move forward during this national emergency. It offers practically focused suggestions at the clinic site and individual level, in addition to community and school levels, to tackle the deeply concerning and alarming increasing rate of emergency department visits by 30% in the last 2 pandemic years for youth suicide attempts. A reflexive visit for an emergency mental health evaluation in an emergency department after a disclosure of suicidal ideation isn’t always the next best step in a pathway to care, nor a sustainable community solution with the dearth of mental health and crisis resources nationally.
With this new tool, let’s proceed through a case of how one would approach a patient in the office setting with a concerning disclosure.
Case
Emily is a 12-year-old girl who presents for a routine well-check in your practice. Her mother shared with you before your examination that she has wondered if Emily may need more support. Since the pandemic, Emily had increasingly spent time using social media and watching television. When you meet with Emily on her own, she says, “I know that life is getting back to normal, and I am supposed to be excited for that, but now I have some anxiety about doing what I used to do. I’ve had some thoughts that it would be better to sleep forever and not wake up ...”
Case discussion
The blueprint recommends universal screening for suicide in all youths aged 12 and over. Not all children, like Emily, will be as open about their inner thoughts. The blueprint provides a link to the ASQ, which comprises questions to ascertain suicide risk and takes 20 seconds to complete with a patient. It is recommended as a first-line screening tool by the NIMH: Suicide Risk Screening Tool. This tool can guide one’s clinical thinking beyond the question of whether or not a child feels “suicidal” after a disclosure such as Emily’s. The blueprint also provides a tip sheet on how to frame these screenings to ensure their thoroughness and interpersonal effectiveness.
Case continued
You go through the ASQ with Emily and she revealed that she has had thoughts about suicide but not currently and without further plans. According to the ASQ, this screening falls into the category of a “non-acute positive screen (potential risk identified),” and now the patient requires a brief suicide safety assessment to determine if an emergency mental health evaluation is needed.
Case discussion
An initial screen (ASQ) should be followed by a Brief Suicide Safety Assessment (BSSA). Two common ones are the ASQ-BSSA (created by the same group that created the ASQ) or the C-SSRS (Columbia suicide severity rating scale).
The blueprint suggests adding this level of depth to one’s investigation in a pediatrics office for a divulged concern with suicidal ideation and following the algorithm to ensure safety.
The complete screening process is also described, in detail, in this instructional video: Suicide Risk Screening Training: How to Manage Patients at Risk for Suicide.
Case continued
Following the ASQ-BSSA, you determine that a referral to more immediate mental health resources would be most helpful and discuss your concerns with Emily and her family. You connect her via a “warm handoff” to a therapist in the office available from the newly adopted primary care mental health integration model. Emily completes further screening for anxiety and depressive disorders and begins a course of cognitive-behavioral therapy. You feel reassured that the therapist can connect with the consulting psychiatrist in the model who can offer a comprehensive psychiatric evaluation if needed. A referral to the emergency department to complete this screening has been avoided. You also plan for a “caring contact” from the office in a day to check in on Emily and her family and, before they go, provide them with crisis services and resources.
The blueprint represents a thoughtful means to know when emergency department visits are necessary and when other forms of support such as robust safety planning, a connection to other nonemergency services, and “caring contacts” from the office within 24-48 hours are actually of more benefit. “Caring contacts,” in particular, have been lauded as having a significant impact in modifying the course of a patient with suicidal ideation. Data show that differences such as follow-up phone calls by any staff member or even postcards from the clinic over 6-12 months can affect suicide risk.
Beyond outlining suicide care pathways, the blueprint also shares clinical algorithms from the National Network of Child Psychiatry Access Programs (NNCPAP). These algorithms help clinicians assess common issues in pediatrics and reserve referrals to psychiatry and escalations of care to the emergency department for certain high-risk circumstances.
The blueprint seeks to provide a “one-stop-shop” for accessible and usable resources in the clinic workflow for suicide prevention. It is inspiring to see our professional organizations pursuing practical and practice-based solutions to our children’s mental health crisis in unison.
Dr. Pawlowski is a child and adolescent consulting psychiatrist. She is a division chief at the University of Vermont Medical Center where she focuses on primary care mental health integration within primary care pediatrics, internal medicine, and family medicine. Email her at [email protected].
In March, an unprecedented collaboration between the American Academy of Pediatrics (AAP), American Foundation for Suicide Prevention (AFSP), and National Institute of Mental Health (NIMH) resulted in the development of the Blueprint for Youth Suicide Prevention. The blueprint comprises a consensus summary of expert recommendations, educational resources, and specific and practical strategies for pediatricians and other health care providers to support youth at risk for suicide in pediatric primary care settings. It is ambitious and far-reaching in scope and speaks to the growing understanding that suicide care pathways offer a clear ray of hope toward a shared “zero suicide” goal.
Following the declaration of a national emergency for child and adolescent mental health, the blueprint represents a resource to help us move forward during this national emergency. It offers practically focused suggestions at the clinic site and individual level, in addition to community and school levels, to tackle the deeply concerning and alarming increasing rate of emergency department visits by 30% in the last 2 pandemic years for youth suicide attempts. A reflexive visit for an emergency mental health evaluation in an emergency department after a disclosure of suicidal ideation isn’t always the next best step in a pathway to care, nor a sustainable community solution with the dearth of mental health and crisis resources nationally.
With this new tool, let’s proceed through a case of how one would approach a patient in the office setting with a concerning disclosure.
Case
Emily is a 12-year-old girl who presents for a routine well-check in your practice. Her mother shared with you before your examination that she has wondered if Emily may need more support. Since the pandemic, Emily had increasingly spent time using social media and watching television. When you meet with Emily on her own, she says, “I know that life is getting back to normal, and I am supposed to be excited for that, but now I have some anxiety about doing what I used to do. I’ve had some thoughts that it would be better to sleep forever and not wake up ...”
Case discussion
The blueprint recommends universal screening for suicide in all youths aged 12 and over. Not all children, like Emily, will be as open about their inner thoughts. The blueprint provides a link to the ASQ, which comprises questions to ascertain suicide risk and takes 20 seconds to complete with a patient. It is recommended as a first-line screening tool by the NIMH: Suicide Risk Screening Tool. This tool can guide one’s clinical thinking beyond the question of whether or not a child feels “suicidal” after a disclosure such as Emily’s. The blueprint also provides a tip sheet on how to frame these screenings to ensure their thoroughness and interpersonal effectiveness.
Case continued
You go through the ASQ with Emily and she revealed that she has had thoughts about suicide but not currently and without further plans. According to the ASQ, this screening falls into the category of a “non-acute positive screen (potential risk identified),” and now the patient requires a brief suicide safety assessment to determine if an emergency mental health evaluation is needed.
Case discussion
An initial screen (ASQ) should be followed by a Brief Suicide Safety Assessment (BSSA). Two common ones are the ASQ-BSSA (created by the same group that created the ASQ) or the C-SSRS (Columbia suicide severity rating scale).
The blueprint suggests adding this level of depth to one’s investigation in a pediatrics office for a divulged concern with suicidal ideation and following the algorithm to ensure safety.
The complete screening process is also described, in detail, in this instructional video: Suicide Risk Screening Training: How to Manage Patients at Risk for Suicide.
Case continued
Following the ASQ-BSSA, you determine that a referral to more immediate mental health resources would be most helpful and discuss your concerns with Emily and her family. You connect her via a “warm handoff” to a therapist in the office available from the newly adopted primary care mental health integration model. Emily completes further screening for anxiety and depressive disorders and begins a course of cognitive-behavioral therapy. You feel reassured that the therapist can connect with the consulting psychiatrist in the model who can offer a comprehensive psychiatric evaluation if needed. A referral to the emergency department to complete this screening has been avoided. You also plan for a “caring contact” from the office in a day to check in on Emily and her family and, before they go, provide them with crisis services and resources.
The blueprint represents a thoughtful means to know when emergency department visits are necessary and when other forms of support, such as robust safety planning, a connection to other nonemergency services, and “caring contacts” from the office within 24-48 hours, are actually of more benefit. “Caring contacts,” in particular, have been lauded for their significant impact on the course of patients with suicidal ideation. Data show that measures as simple as follow-up phone calls from any staff member, or even postcards from the clinic over 6-12 months, can reduce suicide risk.
Beyond outlining suicide care pathways, the blueprint also shares clinical algorithms from the National Network of Child Psychiatry Access Programs (NNCPAP). These algorithms help clinicians assess common issues in pediatrics and reserve referrals to psychiatry and escalations of care to the emergency department for certain high-risk circumstances.
The blueprint seeks to provide a “one-stop-shop” for accessible and usable resources in the clinic workflow for suicide prevention. It is inspiring to see our professional organizations pursuing practical and practice-based solutions to our children’s mental health crisis in unison.
Dr. Pawlowski is a child and adolescent consulting psychiatrist. She is a division chief at the University of Vermont Medical Center where she focuses on primary care mental health integration within primary care pediatrics, internal medicine, and family medicine. Email her at [email protected].
Diagnosing adolescent ADHD
Pediatricians are increasingly expert in the assessment and treatment of attention-deficit/hyperactivity disorder. But what do you do when adolescents present to your office saying they think they have ADHD? While ADHD is a common and treatable disorder of youth, it is important to take special care when assessing an adolescent. Difficulties with attention and concentration are common symptoms for many different challenges of adolescence, and for ADHD to be the underlying cause, those symptoms must have started prior to adolescence (according to DSM-5, prior to the age of 12). When your adolescent patients or their parents come to your office complaining of inattention, it is important to consider the full range of possible explanations.
Sleep
We have written in this column previously about the challenges that adolescents face in getting adequate sleep consistently. Teenagers, on average, need more than 9 hours of sleep nightly, and American teenagers get fewer than 6. This mismatch stems from physiologic shifts that move their natural sleep onset significantly later, while school still starts early. It’s often compounded by other demands on their time, including homework, extracurricular activities, and the gravitational pull of social connections. Independent teenagers make their own decisions about how to manage their time and may feel sleep is optional, or may manage their fatigue with naps and caffeine, both of which further compromise the quality and efficiency of sleep.
Chronic sleep deprivation will present with difficulties with focus, attention, memory, and cognitive performance. Treatment of this problem with stimulants is likely to make the underlying poor sleep habits even worse. When your patient presents complaining of difficulty concentrating and worsening school performance, be sure to start with a thorough sleep history, and always provide guidance about the body’s need for sleep and healthy sleep habits.
Anxiety
Anxiety disorders are the most common psychiatric illnesses of youth, with estimates that as many as 30% of children and adolescents experience one; the true prevalence of ADHD, by contrast, is estimated at about 4% of the population. Whether social phobia, generalized anxiety disorder, or even posttraumatic stress disorder, anxiety disorders interfere with attention because ruminative worry distracts those experiencing it. Anxiety can also affect attention and focus indirectly by interfering with restful sleep. Anxiety disorders can be difficult to identify, as sufferers typically internalize their symptoms. But inquire about specific worries (such as speaking in front of others, meeting new people, or an illness or accident striking themselves or a loved one), how much time they take up, and whether they fill the patient’s thoughts during quiet or downtime. You may use a screening instrument such as the Pediatric Symptom Checklist or the SCARED, both of which will indicate a likely problem with anxiety. While it is possible to have ADHD comorbid with an anxiety disorder, the anxiety disorder will likely worsen with stimulants and should be treated first. These are usually curable illnesses, and you may find that remission of anxiety symptoms resolves the attentional problems.
Depression
Mood disorders are less common than anxiety disorders in youth but far more prevalent than ADHD, and depression is usually marked by serious difficulty concentrating across settings (including with things that were previously very interesting). A sullen teenager who is deeply self-critical about school performance would benefit from exploration of associated changes in mood, interests, energy, appetite, and sleep, and of feelings of worthlessness, guilt, and suicidal thoughts. The PHQ9A is a simple, free screening instrument that is reasonable to use at every sick visit (and well-check) with your adolescent patients, given the risks of undetected and untreated depression. If your patient presents complaining of poor school performance, always screen for depression. As with anxiety disorders, comorbid ADHD is possible, but it is always recommended to treat the mood disorder first and then assess for residual ADHD symptoms once the mood disorder is in remission.
Substance abuse
Adolescence is a time of exploration, and drug and alcohol use is common. While attentional impairment will happen with intoxication, occasional or rare use should not lead to consistent impairment in school. But when parents are more worried than their children about a significant change in school performance, it is important to screen for substance abuse. A child with a secret substance use disorder will often present with behavioral changes and deteriorating school performance and might deny any drug or alcohol use to parents. Indeed, stimulants have some street value and some patients may be seeking a stimulant prescription to sell or trade for other drugs. Regular marijuana use may present with only deteriorating school performance and no irritability or other noticeable behavioral changes. Marijuana is seen as safe and even healthy by many teenagers (and even many parents), and some youth may be using it recreationally or to manage difficulties with sleep, anxiety, or mood symptoms.
But there is compelling evidence that marijuana use causes cognitive impairment, including difficulty with sustaining attention, short-term memory, and processing speed, for as long as 24 hours after use. If a teenager is using marijuana daily after school, it is certainly going to interfere, in a dose-dependent manner, with attention and cognitive function. Sustained heavy use can lead to permanent cognitive deficits. It can also trigger or worsen anxiety or mood symptoms (contrary to much popular opinion).
Gathering a thorough substance use history is essential when assessing a teenager for difficulties with focus or attention, especially when these are accompanied by change in behavior and school performance. Remember, it is critical to interview these children without their parents present to invite them to be forthcoming with you.
History
While true ADHD should have been present throughout childhood, it is possible that the symptoms have become noticeable only in adolescence. Patients with very high intelligence and lower levels of impulsivity and hyperactivity might easily have “flown under the radar” during their elementary and even middle school years. Their difficulties with attention and focus may become apparent only when the volume and difficulty of schoolwork grow great enough that intelligence alone is not enough to earn good grades. That is, their problems with executive function, prioritizing, shifting sets, and completing tasks in a timely way make it impossible to keep up good grades when the work gets harder.
Your history should reveal a long-standing pattern of dreaminess or distractibility, a tendency to lose and forget things, and the other symptoms of inattention. Did they often seem not to be listening when they were younger? Forget to hand in homework? Leave chores unfinished? Leave messes behind everywhere they went? These answers will not be definitive, but they help establish that symptoms may have been present for a long time, even if school performance was considered fine until the workload got too large. If such problems were not present before puberty, consider whether a subtle learning disability could be impairing them as they face more challenging academic subjects.
If you have ruled out anxiety, mood, and substance use concerns, and helped them to address a sleep deficit, then you can proceed with an ADHD assessment. It is worthwhile to obtain Vanderbilt Assessments as you would for a younger child. If patients meet criteria, discuss the risks and benefits of medication, executive skills coaching, and environmental adjustments (extra time for tests, a less stimulating environment) that can help them meet academic challenges without the discouragement that ADHD can bring.
Dr. Swick is physician in chief at Ohana, Center for Child and Adolescent Behavioral Health, Community Hospital of the Monterey (Calif.) Peninsula. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. Email them at [email protected].
Ways to lessen toxic effects of chemo in older adults
Age-related changes that potentiate adverse drug reactions include alterations in absorption, distribution, metabolism, and excretion. As such, older patients often require adjustments in medications to optimize safety and use. Medication adjustment is especially important for older patients on complex medication regimens for multiple conditions, such as those undergoing cancer treatment. Three recent high-quality randomized trials evaluated the use of geriatric assessment (GA) in older adults with cancer.1-3
Interdisciplinary GA can identify aging-related conditions associated with poor outcomes in older patients with cancer (e.g., toxic effects of chemotherapy) and provide recommendations aimed at improving health outcomes. The results of these trials suggest that interdisciplinary GA can improve care outcomes and oncologists’ communication for older adults with cancer, and should be considered an emerging standard of care.
Geriatric assessment and chemotherapy-related toxic effects
A cluster randomized trial1 at City of Hope National Medical Center conducted between August 2015 and February 2019 enrolled 613 participants and randomly assigned them to receive a GA-guided intervention or usual standard of care in a 2-to-1 ratio. Participants were eligible for the study if they were aged ≥65 years; had a diagnosis of solid malignant neoplasm of any stage; were starting a new chemotherapy regimen; and were fluent in English, Spanish, or Chinese.
The intervention included a GA at baseline followed by assessments focused on six common areas: sleep problems, problems with eating and feeding, incontinence, confusion, evidence of falls, and skin breakdown. An interdisciplinary team (oncologist, nurse practitioner, pharmacist, physical therapist, occupational therapist, social worker, and nutritionist) performed the assessment and developed a plan of care. Interventions were multifactorial and could include referral to specialists; recommendations for medication changes; symptom management; nutritional intervention with diet recommendations and supplementation; and interventions targeting social, spiritual, and functional well-being. Follow-up by a nurse practitioner continued until completion of chemotherapy or 6 months after starting chemotherapy, whichever was earlier.
The primary outcome was grade 3 or higher chemotherapy-related toxic effects using National Cancer Institute criteria, and secondary outcomes were advance directive completion, emergency room visits and unplanned hospitalizations, and survival up to 12 months. Results showed a 10% absolute reduction in the incidence of grade 3 or higher toxic effects (P = .02), with a number needed to treat of 10. Advance directive completion also increased by 15%, but no differences were observed for other outcomes. This study offers high-quality evidence that a GA-based intervention can reduce toxic effects of chemotherapy regimens for older adults with cancer.
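As a quick check on those figures (this arithmetic is illustrative and not taken from the study report), the number needed to treat is simply the reciprocal of the absolute risk reduction:

\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.10} = 10
\]

The same arithmetic applied to the 20% absolute risk reduction reported in the community oncology trial discussed next would imply an NNT of 5.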
Geriatric assessment in community oncology practices
A recent study by Supriya G. Mohile, MD, and colleagues2 is the first nationwide multicenter clinical trial to demonstrate the effects of GA and GA-guided management. This study was conducted in 40 oncology practices from the University of Rochester National Cancer Institute Community Oncology Research Program network. Centers were randomly assigned to intervention or usual care (362 patients treated by 68 oncologists in the intervention group and 371 patients treated by 91 oncologists in the usual-care group). Eligibility criteria were age ≥70 years; impairment in at least one GA domain other than polypharmacy; incurable advanced solid tumor or lymphoma with a plan to start new cancer treatment with a high risk for toxic effects within 4 weeks; and English language fluency. Both study groups underwent a baseline GA that assessed patients’ physical performance, functional status, comorbidity, cognition, nutrition, social support, polypharmacy, and psychological status. For the intervention group, a summary and management recommendations were provided to the treating oncologists.
The primary outcome was grade 3 or higher toxic effects within 3 months of starting a new regimen; secondary outcomes included treatment intensity and survival and GA outcomes within 3 months. A smaller proportion of patients in the intervention group experienced toxicity (51% vs. 71%), with an absolute risk reduction of 20%. Patients in the intervention group also had fewer falls and a greater reduction in medications used; there were no other differences in secondary outcomes. This study offers very strong and generalizable evidence that incorporating GA in the care of older adults with cancer at risk for toxicity can reduce toxicity as well as improve other outcomes, such as falls and polypharmacy.
Geriatric assessment and oncologist-patient communication
A secondary analysis3 of data from Dr. Mohile and colleagues2 evaluated the effect of GA-guided recommendations on oncologist-patient communication regarding comorbidities. Patients (n = 541) included in this analysis were 76.6 years of age on average and had 3.2 (standard deviation, 1.9) comorbid conditions. All patients underwent GA, but only oncologists in the intervention arm received GA-based recommendations. Clinical encounters between oncologist and patient immediately following the GA were audio recorded and analyzed to examine communication between oncologists and participants as it relates to chronic comorbid conditions.
In the intervention arm, more discussions regarding comorbidities took place, and more participants’ concerns about comorbidities were acknowledged. More importantly, participants in the intervention group were 2.4 times more likely to have their concerns about comorbidities addressed through referral or education, compared with the usual-care group (P = .004). Moreover, 41% of oncologists in the intervention arm modified dosage or cancer treatment schedule because of concern about tolerability or comorbidities. This study demonstrates beneficial effects of GA in increasing communication and perhaps consideration of comorbidities of older adults when planning cancer treatment.
Dr. Hung is professor of geriatrics and palliative care at Mount Sinai Hospital, New York. He disclosed no relevant conflicts of interest.
References
1. Li D et al. JAMA Oncol. 2021;7:e214158.
2. Mohile SG et al. Lancet. 2021;398:1894-1904.
3. Kleckner AS et al. JCO Oncol Pract. 2022;18:e9-19.
A version of this article first appeared on Medscape.com.
Is cancer testing going to the dogs? Nope, ants
The oncologist’s new best friend
We know that dogs have very sensitive noses. They can track criminals and missing persons and sniff out drugs and bombs. They can even detect cancer cells … after months of training.
And then there are ants.
Cancer cells produce volatile organic compounds (VOCs), which can be sniffed out by dogs and other animals with sufficiently sophisticated olfactory senses. A group of French investigators decided to find out if Formica fusca is such an animal.
First, they placed breast cancer cells and healthy cells in a petri dish. The sample of cancer cells, however, included a sugary treat. “Over successive trials, the ants got quicker and quicker at finding the treat, indicating that they had learned to recognize the VOCs produced by the cancerous cells, using these as a beacon to guide their way to the sugary delight,” according to IFL Science.
When the researchers removed the treat, the ants still went straight for the cancer cells. Then they removed the healthy cells and substituted another type of breast cancer cell, with just one type getting the treat. They went for the cancer cells with the treat, “indicating that they were capable of distinguishing between the different cancer types based on the unique pattern of VOCs emitted by each one,” IFL Science explained.
It’s just another chapter in the eternal struggle between dogs and ants. Dogs need months of training to learn to detect cancer cells; ants can do it in 30 minutes. Over the course of a dog’s training, Fido eats more food than 10,000 ants combined. (Okay, we’re guessing here, but it’s got to be a pretty big number, right?)
Then there’s the warm and fuzzy factor. Just look at that picture. Who wouldn’t want a cutie like that curling up in the bed next to you?
Console War II: Battle of the Twitter users
Video games can be a lot of fun, provided you’re not playing something like Rock Simulator. Or Surgeon Simulator. Or Surgeon Simulator 2. Yes, those are all real games. But calling yourself a video gamer invites a certain negative connotation, and nowhere can that be better exemplified than the increasingly ridiculous console war.
For those who don’t know their video game history, back in the early ’90s Nintendo and Sega were the main video game console makers. Nintendo had Mario, Sega had Sonic, and everyone had an opinion on which was best. With Sega now but a shell of its former self and Nintendo viewed as too “casual” for the true gaming connoisseur, today’s battle pits PlayStation against Xbox, and fans of both consoles spend their time trying to one-up each other in increasingly silly online arguments.
That brings us nicely to a Twitter user named “Shreeveera,” who is very vocal about his love of PlayStation and hatred of the Xbox. Importantly, for LOTME purposes, Shreeveera identified himself as a doctor on his profile, and in the middle of an argument, Xbox enthusiasts called his credentials into question.
At this point, most people would recognize that there are very few noteworthy console-exclusive video games in today’s world and that any argument about consoles essentially comes down to which console design you like or which company you find less distasteful, and they would step away from the Twitter argument. Shreeveera is not most people, and he decided the next logical move was to post a video of himself and an anesthetized patient about to undergo a laparoscopic cholecystectomy.
This move did prove that he was indeed a doctor, but the ethics of posting such a video with a patient in the room are dubious at best. Since Shreeveera also listed the hospital he worked at, numerous Twitter users review-bombed the hospital with one-star reviews. Shreeveera’s fate is unknown, but he did take down the video and removed “doctor by profession” from his profile. He also made a second video asking Twitter to stop trying to ruin his life. We’re sure that’ll go well. Twitter is known for being completely fair and reasonable.
Use your words to gain power
We live in the age of the emoji. The use of emojis in texts and emails is basically the new shorthand. It’s a fun and easy way to chat with people close to us, but a new study shows that it doesn’t help in a business setting. In fact, it may do a little damage.
The use of images such as emojis in communication or logos can make a person seem less powerful than someone who opts for written words, according to Elinor Amit, PhD, of Tel Aviv University and associates.
Participants in their study were asked to imagine shopping with a person wearing a T-shirt. Half were then shown the logo of the Red Sox baseball team and half saw the words “Red Sox.” In another scenario, they were asked to imagine attending a retreat of a company called Lotus. Then half were shown an employee wearing a shirt with an image of a lotus flower and half saw the verbal logo “Lotus.” In both scenarios, the individuals wearing shirts with images were seen as less powerful than the people wearing shirts with words.
Why is that? In a EurekAlert statement, Dr. Amit said that “visual messages are often interpreted as a signal for desire for social proximity.” In a world with COVID-19, that could give anyone pause.
That desire for more social proximity, in turn, signals a loss of power, because research shows that people who want to be around others more are seen as less powerful than people who don’t.
With the reduced social proximity we have these days, we may want to keep things cool and lighthearted, especially in work emails with people whom we’ve never met. Still, using your words to say thank you in the multitude of emails you answer on a regular basis may be better than that thumbs-up emoji. Nobody will think less of you.
Should Daylight Saving Time still be a thing?
This past week we experienced the spring-forward portion of Daylight Saving Time, which took an hour of sleep away from us all. Some of us may still be struggling to find our footing with the time change, but at least it’s still sunny out at 7 p.m. For those who don’t really see the point of changing the clocks twice a year, there may finally be some movement toward stopping.
Sen. Marco Rubio, sponsor of a bill to make the time change permanent, put it simply: “If we can get this passed, we don’t have to do this stupidity anymore.” Message received, apparently, since the measure just passed unanimously in the Senate.
It’s not clear if President Biden will approve it, though, because there’s a lot that comes into play: economic needs, seasonal depression, and safety.
“I know this is not the most important issue confronting America, but it’s one of those issues where there’s a lot of agreement,” Sen. Rubio said.
Not total agreement, though. The National Association of Convenience Stores is opposed to the bill, and Reuters noted that one witness at a recent hearing said the time change “is like living in the wrong time zone for almost eight months out of the year.”
Many people, however, seem to be leaning toward the permanent spring-forward as it gives businesses a longer window to provide entertainment in the evenings and kids are able to play outside longer after school.
Honestly, we’re leaning toward whichever one can reduce seasonal depression.
The oncologist’s new best friend
We know that dogs have very sensitive noses. They can track criminals and missing persons and sniff out drugs and bombs. They can even detect cancer cells … after months of training.
And then there are ants.
Cancer cells produce volatile organic compounds (VOCs), which can be sniffed out by dogs and other animals with sufficiently sophisticated olfactory senses. A group of French investigators decided to find out if Formica fusca is such an animal.
First, they placed breast cancer cells and healthy cells in a petri dish. The sample of cancer cells, however, included a sugary treat. “Over successive trials, the ants got quicker and quicker at finding the treat, indicating that they had learned to recognize the VOCs produced by the cancerous cells, using these as a beacon to guide their way to the sugary delight,” according to IFL Science.
When the researchers removed the treat, the ants still went straight for the cancer cells. Then they removed the healthy cells and substituted another type of breast cancer cell, with just one type getting the treat. They went for the cancer cells with the treat, “indicating that they were capable of distinguishing between the different cancer types based on the unique pattern of VOCs emitted by each one,” IFL Science explained.
It’s just another chapter in the eternal struggle between dogs and ants. Dogs need months of training to learn to detect cancer cells; ants can do it in 30 minutes. Over the course of a dog’s training, Fido eats more food than 10,000 ants combined. (Okay, we’re guessing here, but it’s got to be a pretty big number, right?)
Then there’s the warm and fuzzy factor. Just look at that picture. Who wouldn’t want a cutie like that curling up in the bed next to you?
Console War II: Battle of the Twitter users
Video games can be a lot of fun, provided you’re not playing something like Rock Simulator. Or Surgeon Simulator. Or Surgeon Simulator 2. Yes, those are all real games. But calling yourself a video gamer invites a certain negative connotation, and nowhere can that be better exemplified than the increasingly ridiculous console war.
For those who don’t know their video game history, back in the early 90s Nintendo and Sega were the main video game console makers. Nintendo had Mario, Sega had Sonic, and everyone had an opinion on which was best. With Sega now but a shell of its former self and Nintendo viewed as too “casual” for the true gaming connoisseur, today’s battle pits Playstation against Xbox, and fans of both consoles spend their time trying to one-up each other in increasingly silly online arguments.
That brings us nicely to a Twitter user named “Shreeveera,” who is very vocal about his love of Playstation and hatred of the Xbox. Importantly, for LOTME purposes, Shreeveera identified himself as a doctor on his profile, and in the middle of an argument, Xbox enthusiasts called his credentials into question.
At this point, most people would recognize that there are very few noteworthy console-exclusive video games in today’s world and that any argument about consoles essentially comes down to which console design you like or which company you find less distasteful, and they would step away from the Twitter argument. Shreeveera is not most people, and he decided the next logical move was to post a video of himself and an anesthetized patient about to undergo a laparoscopic cholecystectomy.
This move did prove that he is indeed a doctor, but the ethics of posting such a video with a patient in the room are dubious at best. Since Shreeveera also listed the hospital where he worked, numerous Twitter users review-bombed it with one-star reviews. Shreeveera’s fate is unknown, but he took down the video and removed “doctor by profession” from his profile. He also made a second video asking Twitter to stop trying to ruin his life. We’re sure that’ll go well. Twitter is known for being completely fair and reasonable.
Use your words to gain power
We live in the age of the emoji. The use of emojis in texts and emails is basically the new shorthand. It’s a fun and easy way to chat with people close to us, but a new study shows that it doesn’t help in a business setting. In fact, it may do a little damage.
The use of images such as emojis in communication or logos can make a person seem less powerful than someone who opts for written words, according to Elinor Amit, PhD, of Tel Aviv University, and associates.
Participants in their study were asked to imagine shopping with a person wearing a T-shirt. Half were then shown the logo of the Red Sox baseball team and half saw the words “Red Sox.” In another scenario, participants were asked to imagine attending a retreat of a company called Lotus. Half were then shown an employee wearing a shirt with an image of a lotus flower and half saw the verbal logo “Lotus.” In both scenarios, the individuals wearing shirts with images were seen as less powerful than those wearing shirts with words.
Why is that? In a EurekAlert statement, Dr. Amit said that “visual messages are often interpreted as a signal for desire for social proximity.” In a world with COVID-19, that could give anyone pause.
That perceived desire for social proximity, in turn, signals a loss of power, because research shows that people who want to be around others more are seen as less powerful than people who don’t.
With the reduced social proximity we have these days, we may want to keep things cool and lighthearted, especially in work emails with people we’ve never met. Even so, using your words to say thank you in the multitude of emails you answer every day may serve you better than a thumbs-up emoji. Nobody will think less of you.
Should daylight saving time still be a thing?
This past week, we experienced the spring-forward portion of daylight saving time, which took an hour of sleep away from us all. Some of us may still be struggling to find our footing with the time change, but at least it’s still sunny out at 7 p.m. For those who don’t really see the point of changing the clocks twice a year, there may be good news.
Sen. Marco Rubio, sponsor of a bill to make daylight saving time permanent, put it simply: “If we can get this passed, we don’t have to do this stupidity anymore.” Message received, apparently, since the measure just passed unanimously in the Senate.
It’s not clear whether President Biden will sign it, though, because there’s a lot that comes into play: economic needs, seasonal depression, and safety.
“I know this is not the most important issue confronting America, but it’s one of those issues where there’s a lot of agreement,” Sen. Rubio said.
Not total agreement, though. The National Association of Convenience Stores is opposed to the bill, and Reuters noted that one witness at a recent hearing said the time change “is like living in the wrong time zone for almost eight months out of the year.”
Many people, however, seem to be leaning toward the permanent spring-forward, as it gives businesses a longer evening window for entertainment and lets kids play outside longer after school.
Honestly, we’re leaning toward whichever one can reduce seasonal depression.
High-intensity exercise vs. omega-3s for heart failure risk reduction
A year of high-intensity interval training seemed to benefit obese middle-aged adults at high risk of heart failure, but omega-3 fatty acid supplementation had no effect on the cardiac biomarkers measured, in a small, single-center, prospective study.
“One year of HIIT training reduces adiposity but had no consistent effect on myocardial triglyceride content or visceral adiposity,” wrote lead author Christopher M. Hearon Jr., PhD, and colleagues in JACC: Heart Failure. “However, long-duration HIIT improves fitness and induces favorable cardiac remodeling.” Omega-3 supplementation, however, had “no independent or additive effect.” Dr. Hearon is an instructor of applied clinical research at the University of Texas Southwestern Medical Center in Dallas.
Investigators there and at the Institute for Exercise and Environmental Medicine at Texas Health Presbyterian Hospital Dallas studied 80 obese patients aged 40-55 years classified as at high risk for HF. They randomized them either to a year of high-intensity interval training (HIIT) plus daily supplementation with 1.6 g omega-3 FA or placebo, or to a control group likewise split between supplementation and placebo. Fifty-six patients completed the 1-year study, with a compliance rate of 90% in the HIIT group and 92% in those assigned omega-3 FA supplementation.
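In other words, the design crossed two factors: exercise (HIIT vs control) and supplement (omega-3 vs placebo). Below is a minimal sketch of that 2 × 2 allocation as we read it; the labels and function are ours, and the trial’s actual randomization procedure is not described in this summary.

    import random

    # Illustrative 2 x 2 allocation: exercise arm crossed with supplement arm.
    # Hypothetical labels; not the investigators' randomization code.
    def assign_participant(rng: random.Random) -> tuple[str, str]:
        arm = rng.choice(["HIIT", "control"])
        supplement = rng.choice(["omega-3 1.6 g/day", "placebo"])
        return arm, supplement

    rng = random.Random(0)
    print([assign_participant(rng) for _ in range(4)])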
Carl J. “Chip” Lavie, MD, of the John Ochsner Heart and Vascular Institute in New Orleans, commented that, although the study was “extremely well done from an excellent research group,” it was limited by its small population and relatively short follow-up. Future research should evaluate HIIT and moderate exercise on clinical events over a longer term, as well as different doses of omega-3, he said. “There is tremendous potential for omega-3 in heart failure prevention and treatment.”
HIIT boosts exercise capacity, more
In the study, the HIIT group improved on a number of cardiac markers, with roughly 22% gains in exercise capacity as measured by absolute and relative peak oxygen uptake (VO2), even without significant weight loss. Absolute and relative peak VO2 rose by an average of 0.43 L/min (0.32-0.53; P < .0001) and 4.46 mL/kg per minute (3.18-5.56; P < .0001), respectively.
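The absolute and relative gains are two views of the same change, since relative peak VO2 is simply absolute peak VO2 scaled by body mass. A minimal sketch of the conversion follows; the 96-kg body mass is a hypothetical example, not a value reported in the study.

    # Relative VO2 (mL/kg per minute) = absolute VO2 (L/min) * 1000 / body mass (kg).
    def relative_vo2(absolute_l_per_min: float, body_mass_kg: float) -> float:
        return absolute_l_per_min * 1000 / body_mass_kg

    # A 0.43 L/min gain in a hypothetical 96-kg participant:
    print(round(relative_vo2(0.43, 96.0), 2))  # ~4.48, close to the reported 4.46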
The researchers attributed the increase in peak VO2 to an increase in peak cardiac output averaging 2.15 L/min (95% confidence interval, 0.90-3.39; P = .001) and stroke volume averaging 9.46 mL (95% CI, 0.65-18.27; P = .04). A year of exercise training also resulted in changes in cardiac remodeling, including increases in left ventricular (LV) mass and LV end-diastolic volume, averaging 9.4 g (95% CI, 4.36-14.44; P < .001) and 12.33 mL (95% CI, 5.61-19.05; P < .001), respectively.
The study also found that neither intervention had any appreciable impact on body weight, body mass index, body surface area, or lean mass, or on markers of arterial or local carotid stiffness. The exercise group had a modest decrease in fat mass, averaging 2.63 kg (95% CI, –4.81 to –0.46; P = .02), but with no added effect from omega-3 supplementation.
The authors noted that high-dose omega-3 supplements have been found to lower triglyceride levels in people with severe hypertriglyceridemia, and they had hypothesized that HIIT alone or combined with omega-3 supplementation would improve fitness and biomarkers in people with stage A HF. “Contrary to our hypothesis, we found that one year of n-3FA [omega-3 FA] supplementation had no detectable effect on any parameter related to cardiopulmonary fitness, cardiovascular remodeling/stiffness, visceral adiposity, or myocardial triglyceride content,” Dr. Hearon and colleagues wrote.
The study “shows that obese middle-aged patients with heart failure with preserved ejection fraction [HFpEF] can markedly improve their fitness with HIIT and, generally, fitness is one of the strongest if not the strongest predictor of prognosis and survival,” said Dr. Lavie.
“Studies are needed on exercise that improves fitness in both HF with reduced ejection fraction and HFpEF, but especially HFpEF,” he said.
The study received funding from the American Heart Association Strategically Focused Research Network. Dr. Hearon and coauthors have no relevant disclosures. Dr. Lavie is a speaker and consultant for PAI Health, the Global Organization for EPA and DHA Omega-3s, and DSM Nutritional Products.
FROM JACC: HEART FAILURE
Hematocrit, White Blood Cells, and Thrombotic Events in the Veteran Population With Polycythemia Vera
Polycythemia vera (PV) is a rare myeloproliferative neoplasm affecting 44 to 57 individuals per 100,000 in the United States.1,2 It is characterized by somatic mutations in the hematopoietic stem cell, resulting in hyperproliferation of mature myeloid lineage cells.2 Sustained erythrocytosis is a hallmark of PV, although many patients also have leukocytosis and thrombocytosis.2,3 These patients have increased inherent thrombotic risk with arterial events reported to occur at rates of 7 to 21/1000 person-years and venous thrombotic events at 5 to 20/1000 person-years.4-7 Thrombotic and cardiovascular events are leading causes of morbidity and mortality, resulting in a reduced overall survival of patients with PV compared with the general population.3,8-10
Blood Cell Counts and Thrombotic Events in PV
Treatment strategies for patients with PV mainly aim to prevent or manage thrombotic and bleeding complications through normalization of blood counts.11 Hematocrit (Hct) control has been reported to be associated with reduced thrombotic risk in patients with PV. This was demonstrated most prominently in the prospective, randomized Cytoreductive Therapy in Polycythemia Vera (CYTO-PV) trial, in which participants were randomized 1:1 to maintain either a low (< 45%) or high (45%-50%) Hct for 5 years to examine the long-term effects of more- or less-intensive cytoreductive therapy.12 Patients in the low-Hct group were found to have a lower rate of death from cardiovascular events or major thrombosis (1.1/100 person-years in the low-Hct group vs 4.4 in the high-Hct group; hazard ratio [HR], 3.91; 95% confidence interval [CI], 1.45-10.53; P = .007). Likewise, cardiovascular events occurred at a lower rate in patients in the low-Hct group compared with the high-Hct group (4.4% vs 10.9% of patients, respectively; HR, 2.69; 95% CI, 1.19-6.12; P = .02).12
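For readers less used to the unit, a rate of 1.1/100 person-years means 1.1 events per 100 years of combined follow-up. A minimal sketch of that arithmetic follows; the event and follow-up counts are hypothetical, chosen only to reproduce the quoted figures, and are not taken from CYTO-PV.

    # Events per 100 person-years = 100 * events / total person-years of follow-up.
    # Hypothetical counts below; not CYTO-PV data.
    def rate_per_100_person_years(events: int, person_years: float) -> float:
        return 100 * events / person_years

    print(round(rate_per_100_person_years(8, 720), 1))   # 1.1, like the low-Hct group
    print(round(rate_per_100_person_years(32, 720), 1))  # 4.4, like the high-Hct group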
Leukocytosis has also been linked to elevated risk for vascular events as shown in several studies, including the real-world European Collaboration on Low-Dose Aspirin in PV (ECLAP) observational study and a post hoc subanalysis of the CYTO-PV study.13,14 In a multivariate, time-dependent analysis in ECLAP, patients with white blood cell (WBC) counts > 15 × 10⁹/L had a significant increase in the risk of thrombosis compared with those who had lower WBC counts, with higher WBC count more strongly associated with arterial than venous thromboembolism.13 In CYTO-PV, a significant correlation between elevated WBC count (≥ 11 × 10⁹/L vs reference level of < 7 × 10⁹/L) and time-dependent risk of major thrombosis was shown (HR, 3.9; 95% CI, 1.24-12.3; P = .02).14 Likewise, WBC count ≥ 11 × 10⁹/L was found to be a predictor of subsequent venous events in a separate single-center multivariate analysis of patients with PV.8
Although CYTO-PV remains one of the largest prospective landmark studies in PV demonstrating the impact of Hct control on thrombosis, it is worth noting that patients in the high-Hct group, who received less frequent myelosuppressive therapy with hydroxyurea than those in the low-Hct group, also had higher WBC counts.12,15 Work is needed to determine the relative effects of high Hct and high WBC counts on thrombotic risk in PV independent of each other.
The Veteran Population With PV
Two recently published retrospective analyses from Parasuraman and colleagues used data from the Veterans Health Administration (VHA), the largest integrated health care system in the US, with the aim of replicating the CYTO-PV findings in a real-world population.16,17 The 2 analyses focused independently on the effects of Hct control and WBC count on the risk of a thrombotic event in patients with PV.
In the first retrospective analysis, 213 patients with PV and no prior thrombosis were placed into groups based on whether Hct levels were consistently either < 45% or ≥ 45% throughout the study period.17 The mean follow-up time was 2.3 years, during which 44.1% of patients experienced a thrombotic event (Figure 1). Patients with Hct levels < 45% had a lower rate of thrombotic events compared with those with levels ≥ 45% (40.3% vs 54.2%, respectively; HR, 1.61; 95% CI, 1.03-2.51; P = .04). In a sensitivity analysis that included patients with pre-index thrombotic events (N = 342), similar results were noted (55.6% vs 76.9% between the < 45% and ≥ 45% groups, respectively; HR, 1.95; 95% CI, 1.46-2.61; P < .001).
In the second analysis, the authors investigated the relationship between WBC counts and thrombotic events.16 Evaluable patients (N = 1565) were grouped into 1 of 4 cohorts based on the last WBC measurement taken during the study period before a thrombotic event or through the end of follow-up: (1) WBC < 7.0 × 10⁹/L, (2) 7.0 to 8.4 × 10⁹/L, (3) 8.5 to < 11.0 × 10⁹/L, or (4) ≥ 11.0 × 10⁹/L. Mean follow-up time ranged from 3.6 to 4.5 years among WBC count cohorts, during which 24.9% of patients experienced a thrombotic event. Compared with the reference cohort (WBC < 7.0 × 10⁹/L), a significant positive association between WBC counts and thrombotic event occurrence was observed among patients with WBC counts of 8.5 to < 11.0 × 10⁹/L (HR, 1.47; 95% CI, 1.10-1.96; P < .01) and ≥ 11.0 × 10⁹/L (HR, 1.87; 95% CI, 1.44-2.43; P < .001) (Figure 2).16 When all patients were included in a sensitivity analysis, regardless of whether they experienced thrombotic events before the index date (N = 1876), similar results were obtained (7.0-8.4 × 10⁹/L group: HR, 1.22; 95% CI, 0.97-1.55; P = .0959; 8.5 to < 11.0 × 10⁹/L group: HR, 1.41; 95% CI, 1.10-1.81; P = .0062; ≥ 11.0 × 10⁹/L group: HR, 1.53; 95% CI, 1.23-1.91; P < .001; all compared with the < 7.0 × 10⁹/L reference group). Rates of phlebotomy and cytoreductive treatments were similar across groups.16
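Because the cohort assignment reduces to three cut points on the last recorded WBC count, it can be written compactly. The sketch below (in units of 10⁹/L) uses our own function and label names, which do not come from the published analysis.

    # Assign a WBC count (units: 10^9/L) to one of the four cohorts described above.
    def wbc_cohort(wbc: float) -> str:
        if wbc < 7.0:
            return "< 7.0 (reference)"
        if wbc < 8.5:
            return "7.0 to 8.4"
        if wbc < 11.0:
            return "8.5 to < 11.0"
        return ">= 11.0"

    print(wbc_cohort(9.2))  # "8.5 to < 11.0", the lowest band with a significant HR here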
Some limitations of these studies are attributable to their retrospective design, their reliance on health records, and the characteristics of the VHA population, which differ from those of the general population. For example, patients with PV in the VHA population had significantly increased risk of thrombotic events even at a lower WBC count threshold (≥ 8.5 × 10⁹/L) than that reported in CYTO-PV (≥ 11 × 10⁹/L). Furthermore, approximately one-third of patients had elevated WBC levels, compared with 25.5% in the CYTO-PV study.14,16 This is most likely due to the unique nature of the VHA patient population, which is predominantly older men and generally carries a higher comorbidity burden. A notable pre-index comorbidity burden was reported in the VHA population in the Hct analysis, even when compared with patients with PV in the general US population (Charlson Comorbidity Index score, 1.3 vs 0.8).6,17 Comorbid conditions such as hypertension, diabetes, and tobacco use, which are more common in the VHA population, are independently associated with higher risk of cardiovascular and thrombotic events.18,19 However, whether this higher comorbidity burden affected the treatments patients received was not elucidated, and the effectiveness of treatments in maintaining target Hct levels was not addressed.
Current PV Management and Future Implications
The National Comprehensive Cancer Network (NCCN) clinical practice guidelines in oncology for myeloproliferative neoplasms recommend maintaining Hct levels < 45% in patients with PV.11 Patients with high-risk disease (age ≥ 60 years and/or history of thrombosis) are monitored for new thrombosis or bleeding and are managed for their cardiovascular risk factors. In addition, they receive low-dose aspirin (81-100 mg/day), undergo phlebotomy to maintain an Hct < 45%, and are managed with pharmacologic cytoreductive therapy, primarily hydroxyurea or, for younger patients, peginterferon alfa-2a. Ruxolitinib, a Janus kinase 1 (JAK1)/JAK2 inhibitor, is now approved by the US Food and Drug Administration as second-line treatment for patients with PV who are intolerant of or unresponsive to hydroxyurea or peginterferon alfa-2a.11,20 However, the role of cytoreductive therapy is not clear for patients with low-risk disease (age < 60 years and no history of thrombosis). These patients are managed for their cardiovascular risk factors, undergo phlebotomy to maintain an Hct < 45%, are maintained on low-dose aspirin (81-100 mg/day), and are monitored for indications for cytoreductive therapy, which include any new thrombosis or disease-related major bleeding, frequent or persistent need for phlebotomy with poor tolerance of the procedure, splenomegaly, thrombocytosis, leukocytosis, and disease-related symptoms (eg, aquagenic pruritus, night sweats, fatigue).
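Distilled to its two named criteria, the guideline’s risk dichotomy is simple enough to state in a few lines. Below is a deliberately simplified sketch of that dichotomy and nothing more; real-world management weighs many factors beyond these two inputs, and the function name is ours, not the NCCN’s.

    # Simplified sketch: NCCN high-risk PV = age >= 60 years and/or prior thrombosis.
    # Illustrative only; not a clinical decision tool.
    def pv_risk_category(age_years: int, prior_thrombosis: bool) -> str:
        return "high" if age_years >= 60 or prior_thrombosis else "low"

    print(pv_risk_category(45, False))  # low: aspirin, phlebotomy to Hct < 45%, monitor
    print(pv_risk_category(62, False))  # high: same measures plus cytoreductive therapy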
Even though current guidelines recommend maintaining a target Hct of < 45% in patients with high-risk PV, the role of Hct as the main determinant of thrombotic risk in PV is still debated.21 In JAK2V617F-positive essential thrombocythemia, Hct levels are usually normal, but the risk of thrombosis is nevertheless significant.22 The risk of thrombosis is significantly lower in primary familial and congenital polycythemia, and much lower still in secondary erythrocytosis, such as that seen in cyanotic heart disease, in long-term native dwellers at high altitude, and in carriers of high-oxygen–affinity hemoglobins.21,23 In secondary erythrocytosis from hypoxia or an upregulated hypoxic pathway, such as hypoxia-inducible factor-2α (HIF-2α) mutation and Chuvash erythrocytosis, the risk of thrombosis is associated more with the upregulated HIF pathway and its downstream consequences than with the elevated Hct level.24
However, most current literature supports an association between increased thrombotic risk and higher Hct and WBC counts in patients with PV. In addition, the underlying mechanism of thrombogenesis remains elusive; it is likely a complex process involving interactions among multiple components, including elevated blood counts arising from clonal hematopoiesis, JAK2V617F allele burden, and platelet and WBC activation and their interaction with endothelial cells and inflammatory cytokines.25
Nevertheless, Hct control and aspirin use are the current standard of care for mitigating thrombotic risk in patients with PV, and the results of the 2 analyses by Parasuraman and colleagues, using real-world data from the VHA, support the current practice guideline of maintaining Hct < 45% in these patients. They also provide additional support for considering WBC counts when determining patient risk and treatment plans. Although treatment response criteria from the European LeukemiaNet include achieving normal WBC levels to decrease the risk of thrombosis, current NCCN guidelines neither include WBC counts as a component for establishing patient risk nor provide a target WBC count to guide patient management.11,26,27 Updates to these practice guidelines may be warranted. In addition, further study is needed to understand the mechanism of thrombogenesis in PV and other myeloproliferative disorders in order to develop novel therapeutic targets and improve patient outcomes.
Acknowledgments
Writing assistance was provided by Tania Iqbal, PhD, an employee of ICON (North Wales, PA), and was funded by Incyte Corporation (Wilmington, DE).
1. Mehta J, Wang H, Iqbal SU, Mesa R. Epidemiology of myeloproliferative neoplasms in the United States. Leuk Lymphoma. 2014;55(3):595-600. doi:10.3109/10428194.2013.813500
2. Arber DA, Orazi A, Hasserjian R, et al. The 2016 revision to the World Health Organization classification of myeloid neoplasms and acute leukemia. Blood. 2016;127(20):2391-2405. doi:10.1182/blood-2016-03-643544
3. Tefferi A, Rumi E, Finazzi G, et al. Survival and prognosis among 1545 patients with contemporary polycythemia vera: an international study. Leukemia. 2013;27(9):1874-1881. doi:10.1038/leu.2013.163
4. Marchioli R, Finazzi G, Landolfi R, et al. Vascular and neoplastic risk in a large cohort of patients with polycythemia vera. J Clin Oncol. 2005;23(10):2224-2232. doi:10.1200/JCO.2005.07.062
5. Vannucchi AM, Antonioli E, Guglielmelli P, et al. Clinical profile of homozygous JAK2 617V>F mutation in patients with polycythemia vera or essential thrombocythemia. Blood. 2007;110(3):840-846. doi:10.1182/blood-2006-12-064287
6. Goyal RK, Davis KL, Cote I, Mounedji N, Kaye JA. Increased incidence of thromboembolic event rates in patients diagnosed with polycythemia vera: results from an observational cohort study. Blood (ASH Annual Meeting Abstracts). 2014;124:4840. doi:10.1182/blood.V124.21.4840.4840
7. Barbui T, Carobbio A, Rumi E, et al. In contemporary patients with polycythemia vera, rates of thrombosis and risk factors delineate a new clinical epidemiology. Blood. 2014;124(19):3021-3023. doi:10.1182/blood-2014-07-591610
8. Cerquozzi S, Barraco D, Lasho T, et al. Risk factors for arterial versus venous thrombosis in polycythemia vera: a single center experience in 587 patients. Blood Cancer J. 2017;7(12):662. doi:10.1038/s41408-017-0035-6
9. Stein BL, Moliterno AR, Tiu RV. Polycythemia vera disease burden: contributing factors, impact on quality of life, and emerging treatment options. Ann Hematol. 2014;93(12):1965-1976. doi:10.1007/s00277-014-2205-y
10. Hultcrantz M, Kristinsson SY, Andersson TM-L, et al. Patterns of survival among patients with myeloproliferative neoplasms diagnosed in Sweden from 1973 to 2008: a population-based study. J Clin Oncol. 2012;30(24):2995-3001. doi:10.1200/JCO.2012.42.1925
11. National Comprehensive Cancer Network. NCCN clinical practice guidelines in myeloproliferative neoplasms (Version 1.2020). Accessed March 3, 2022. https://www.nccn.org/professionals/physician_gls/pdf/mpn.pdf
12. Marchioli R, Finazzi G, Specchia G, et al. Cardiovascular events and intensity of treatment in polycythemia vera. N Engl J Med. 2013;368(1):22-33. doi:10.1056/NEJMoa1208500
13. Landolfi R, Di Gennaro L, Barbui T, et al. Leukocytosis as a major thrombotic risk factor in patients with polycythemia vera. Blood. 2007;109(6):2446-2452. doi:10.1182/blood-2006-08-042515
14. Barbui T, Masciulli A, Marfisi MR, et al. White blood cell counts and thrombosis in polycythemia vera: a subanalysis of the CYTO-PV study. Blood. 2015;126(4):560-561. doi:10.1182/blood-2015-04-638593
15. Prchal JT, Gordeuk VR. Treatment target in polycythemia vera. N Engl J Med. 2013;368(16):1555-1556. doi:10.1056/NEJMc1301262
16. Parasuraman S, Yu J, Paranagama D, et al. Elevated white blood cell levels and thrombotic events in patients with polycythemia vera: a real-world analysis of Veterans Health Administration data. Clin Lymphoma Myeloma Leuk. 2020;20(2):63-69. doi:10.1016/j.clml.2019.11.010
17. Parasuraman S, Yu J, Paranagama D, et al. Hematocrit levels and thrombotic events in patients with polycythemia vera: an analysis of Veterans Health Administration data. Ann Hematol. 2019;98(11):2533-2539. doi:10.1007/s00277-019-03793-w
18. WHO CVD Risk Chart Working Group. World Health Organization cardiovascular disease risk charts: revised models to estimate risk in 21 global regions. Lancet Glob Health. 2019;7(10):e1332-e1345. doi:10.1016/S2214-109X(19)30318-3
19. D’Agostino RB Sr, Vasan RS, Pencina MJ, et al. General cardiovascular risk profile for use in primary care: the Framingham Heart Study. Circulation. 2008;117(6):743-753. doi:10.1161/CIRCULATIONAHA.107.699579
20. Jakafi. Package insert. Incyte Corporation; 2020.
21. Gordeuk VR, Key NS, Prchal JT. Re-evaluation of hematocrit as a determinant of thrombotic risk in erythrocytosis. Haematologica. 2019;104(4):653-658. doi:10.3324/haematol.2018.210732
22. Carobbio A, Thiele J, Passamonti F, et al. Risk factors for arterial and venous thrombosis in WHO-defined essential thrombocythemia: an international study of 891 patients. Blood. 2011;117(22):5857-5859. doi:10.1182/blood-2011-02-339002
23. Perloff JK, Marelli AJ, Miner PD. Risk of stroke in adults with cyanotic congenital heart disease. Circulation. 1993;87(6):1954-1959. doi:10.1161/01.cir.87.6.1954
24. Gordeuk VR, Miasnikova GY, Sergueeva AI, et al. Thrombotic risk in congenital erythrocytosis due to up-regulated hypoxia sensing is not associated with elevated hematocrit. Haematologica. 2020;105(3):e87-e90. doi:10.3324/haematol.2019.216267
25. Kroll MH, Michaelis LC, Verstovsek S. Mechanisms of thrombogenesis in polycythemia vera. Blood Rev. 2015;29(4):215-221. doi:10.1016/j.blre.2014.12.002
26. Barbui T, Tefferi A, Vannucchi AM, et al. Philadelphia chromosome-negative classical myeloproliferative neoplasms: revised management recommendations from European LeukemiaNet. Leukemia. 2018;32(5):1057-1069. doi:10.1038/s41375-018-0077-1
27. Barosi G, Mesa R, Finazzi G, et al. Revised response criteria for polycythemia vera and essential thrombocythemia: an ELN and IWG-MRT consensus project. Blood. 2013;121(23):4778-4781. doi:10.1182/blood-2013-01-478891
Polycythemia vera (PV) is a rare myeloproliferative neoplasm affecting 44 to 57 individuals per 100,000 in the United States.1,2 It is characterized by somatic mutations in the hematopoietic stem cell, resulting in hyperproliferation of mature myeloid lineage cells.2 Sustained erythrocytosis is a hallmark of PV, although many patients also have leukocytosis and thrombocytosis.2,3 These patients have increased inherent thrombotic risk with arterial events reported to occur at rates of 7 to 21/1000 person-years and venous thrombotic events at 5 to 20/1000 person-years.4-7 Thrombotic and cardiovascular events are leading causes of morbidity and mortality, resulting in a reduced overall survival of patients with PV compared with the general population.3,8-10
Blood Cell Counts and Thrombotic Events in PV
Treatment strategies for patients with PV mainly aim to prevent or manage thrombotic and bleeding complications through normalization of blood counts.11 Hematocrit (Hct) control has been reported to be associated with reduced thrombotic risk in patients with PV. This was shown and popularized by the prospective, randomized Cytoreductive Therapy in Polycythemia Vera (CYTO-PV) trial in which participants were randomized 1:1 to maintaining either a low (< 45%) or high (45%-50%) Hct for 5 years to examine the long-term effects of more- or less-intensive cytoreductive therapy.12 Patients in the low-Hct group were found to have a lower rate of death from cardiovascular events or major thrombosis (1.1/100 person-years in the low-Hct group vs 4.4 in the high-Hct group; hazard ratio [HR], 3.91; 95% confidence interval [CI], 1.45-10.53; P = .007). Likewise, cardiovascular events occurred at a lower rate in patients in the low-Hct group compared with the high-Hct group (4.4% vs 10.9% of patients, respectively; HR, 2.69; 95% CI, 1.19-6.12; P = .02).12
Leukocytosis has also been linked to elevated risk for vascular events as shown in several studies, including the real-world European Collaboration on Low-Dose Aspirin in PV (ECLAP) observational study and a post hoc subanalysis of the CYTO-PV study.13,14 In a multivariate, time-dependent analysis in ECLAP, patients with white blood cell (WBC) counts > 15 × 109/L had a significant increase in the risk of thrombosis compared with those who had lower WBC counts, with higher WBC count more strongly associated with arterial than venous thromboembolism.13 In CYTO-PV, a significant correlation between elevated WBC count (≥ 11 × 109/L vs reference level of < 7 × 109/L) and time-dependent risk of major thrombosis was shown (HR, 3.9; 95% CI, 1.24-12.3; P = .02).14 Likewise, WBC count ≥ 11 × 109/L was found to be a predictor of subsequent venous events in a separate single-center multivariate analysis of patients with PV.8
Although CYTO-PV remains one of the largest prospective landmark studies in PV demonstrating the impact of Hct control on thrombosis, it is worthwhile to note that the patients in the high-Hct group who received less frequent myelosuppressive therapy with hydroxyurea than the low-Hct group also had higher WBC counts.12,15 Work is needed to determine the relative effects of high Hct and high WBC counts on PV independent of each other.
The Veteran Population with PV
Two recently published retrospective analyses from Parasuraman and colleagues used data from the Veterans Health Administration (VHA), the largest integrated health care system in the US, with an aim to replicate findings from CYTO-PV in a real-world population.16,17 The 2 analyses focused independently on the effects of Hct control and WBC count on the risk of a thrombotic event in patients with PV.
In the first retrospective analysis, 213 patients with PV and no prior thrombosis were placed into groups based on whether Hct levels were consistently either < 45% or ≥ 45% throughout the study period.17 The mean follow-up time was 2.3 years, during which 44.1% of patients experienced a thrombotic event (Figure 1). Patients with Hct levels < 45% had a lower rate of thrombotic events compared to those with levels ≥ 45% (40.3% vs 54.2%, respectively; HR, 1.61; 95% CI, 1.03-2.51; P = .04). In a sensitivity analysis that included patients with pre-index thrombotic events (N = 342), similar results were noted (55.6% vs 76.9% between the < 45% and ≥ 45% groups, respectively; HR, 1.95; 95% CI, 1.46-2.61; P < .001).
In the second analysis, the authors investigated the relationship between WBC counts and thrombotic events.16 Evaluable patients (N = 1565) were grouped into 1 of 4 cohorts based on the last WBC measurement taken during the study period before a thrombotic event or through the end of follow-up: (1) WBC < 7.0 × 109/L, (2) 7.0 to 8.4 × 109/L, (3) 8.5 to < 11.0 × 109/L, or (4) ≥ 11.0 × 109/L. Mean follow-up time ranged from 3.6 to 4.5 years among WBC count cohorts, during which 24.9% of patients experienced a thrombotic event. Compared with the reference cohort (WBC < 7.0 × 109/L), a significant positive association between WBC counts and thrombotic event occurrence was observed among patients with WBC counts of 8.5 to < 11.0 × 109/L (HR, 1.47; 95% CI, 1.10-1.96; P < .01) and ≥ 11 × 109/L (HR, 1.87; 95% CI, 1.44-2.43; P < .001) (Figure 2).16 When including all patients in a sensitivity analysis regardless of whether they experienced thrombotic events before the index date (N = 1876), similar results were obtained (7.0-8.4 × 109/L group: HR, 1.22; 95% CI, 0.97-1.55; P = .0959; 8.5 - 11.0 × 109/L group: HR, 1.41; 95% CI, 1.10-1.81; P = .0062; ≥ 11.0 × 109/L group: HR, 1.53; 95% CI, 1.23-1.91; P < .001; compared with < 7.0 × 109/L reference group). Rates of phlebotomy and cytoreductive treatments were similar across groups.16
Some limitations to these studies are attributable to their retrospective design, reliance on health records, and the VHA population characteristics, which differ from the general population. For example, in this analysis, patients with PV in the VHA population had significantly increased risk of thrombotic events, even at a lower WBC count threshold (≥ 8.5 × 109/L) compared with those reported in CYTO-PV (≥ 11 × 109/L). Furthermore, approximately one-third of patients had elevated WBC levels, compared with 25.5% in the CYTO-PV study.14,16 This is most likely due to the unique nature of the VHA patient population, who are predominantly older adult men and generally have a higher comorbidity burden. A notable pre-index comorbidity burden was reported in the VHA population in the Hct analysis, even when compared to patients with PV in the general US population (Charlson Comorbidity Index score, 1.3 vs 0.8).6,17 Comorbid conditions such as hypertension, diabetes, and tobacco use, which are most common among the VHA population, are independently associated with higher risk of cardiovascular and thrombotic events.18,19 However, whether these higher levels of comorbidities affected the type of treatments they received was not elucidated, and the effectiveness of treatments to maintain target Hct levels was not addressed in the study.
Current PV Management and Future Implications
The National Comprehensive Cancer Network (NCCN) clinical practice guidelines in oncology in myeloproliferative neoplasms recommend maintaining Hct levels < 45% in patients with PV.11 Patients with high-risk disease (age ≥ 60 years and/or history of thrombosis) are monitored for new thrombosis or bleeding and are managed for their cardiovascular risk factors. In addition, they receive low-dose aspirin (81-100 mg/day), undergo phlebotomy to maintain an Hct < 45%, and are managed with pharmacologic cytoreductive therapy. Cytoreductive therapy primarily consists of hydroxyurea or peginterferon alfa-2a for younger patients. Ruxolitinib, a Janus kinase (JAK1)/JAK2 inhibitor, is now approved by the US Food and Drug Administration as second-line treatment for those with PV that is intolerant or unresponsive to hydroxyurea or peginterferon alfa-2a treatments.11,20 However, the role of cytoreductive therapy is not clear for patients with low-risk disease (age < 60 years and no history of thrombosis). These patients are managed for their cardiovascular risk factors, undergo phlebotomy to maintain an Hct < 45%, are maintained on low-dose aspirin (81-100 mg/day), and are monitored for indications for cytoreductive therapy, which include any new thrombosis or disease-related major bleeding, frequent or persistent need for phlebotomy with poor tolerance for the procedure, splenomegaly, thrombocytosis, leukocytosis, and disease-related symptoms (eg, aquagenic pruritus, night sweats, fatigue).
Even though the current guidelines recommend maintaining a target Hct of < 45% in patients with high-risk PV, the role of Hct as the main determinant of thrombotic risk in patients with PV is still debated.21 In JAK2V617F-positive essential thrombocythemia, Hct levels are usually normal but risk of thrombosis is nevertheless still significant.22 The risk of thrombosis is significantly lower in primary familial and congenital polycythemia and much lower in secondary erythrocytosis such as cyanotic heart disease, long-term native dwellers of high altitude, and those with high-oxygen–affinity hemoglobins.21,23 In secondary erythrocytosis from hypoxia or upregulated hypoxic pathway such as hypoxia inducible factor-2α (HIF-2α) mutation and Chuvash erythrocytosis, the risk of thrombosis is more associated with the upregulated HIF pathway and its downstream consequences, rather than the elevated Hct level.24
However, most current literature supports the association of increased risk of thrombosis with higher Hct and high WBC count in patients with PV. In addition, the underlying mechanism of thrombogenesis still remains elusive; it is likely a complex process that involves interactions among multiple components, including elevated blood counts arising from clonal hematopoiesis, JAK2V617F allele burden, and platelet and WBC activation and their interaction with endothelial cells and inflammatory cytokines.25
Nevertheless, Hct control and aspirin use are current standard of care for patients with PV to mitigate thrombotic risk, and the results from the 2 analyses by Parasuraman and colleagues, using real-world data from the VHA, support the current practice guidelines to maintain Hct < 45% in these patients. They also provide additional support for considering WBC counts when determining patient risk and treatment plans. Although treatment response criteria from the European LeukemiaNet include achieving normal WBC levels to decrease the risk of thrombosis, current NCCN guidelines do not include WBC counts as a component for establishing patient risk or provide a target WBC count to guide patient management.11,26,27 Updates to these practice guidelines may be warranted. In addition, further study is needed to understand the mechanism of thrombogenesis in PV and other myeloproliferative disorders in order to develop novel therapeutic targets and improve patient outcomes.
Acknowledgments
Writing assistance was provided by Tania Iqbal, PhD, an employee of ICON (North Wales, PA), and was funded by Incyte Corporation (Wilmington, DE).
Polycythemia vera (PV) is a rare myeloproliferative neoplasm affecting 44 to 57 individuals per 100,000 in the United States.1,2 It is characterized by somatic mutations in the hematopoietic stem cell, resulting in hyperproliferation of mature myeloid lineage cells.2 Sustained erythrocytosis is a hallmark of PV, although many patients also have leukocytosis and thrombocytosis.2,3 These patients have increased inherent thrombotic risk with arterial events reported to occur at rates of 7 to 21/1000 person-years and venous thrombotic events at 5 to 20/1000 person-years.4-7 Thrombotic and cardiovascular events are leading causes of morbidity and mortality, resulting in a reduced overall survival of patients with PV compared with the general population.3,8-10
Blood Cell Counts and Thrombotic Events in PV
Treatment strategies for patients with PV mainly aim to prevent or manage thrombotic and bleeding complications through normalization of blood counts.11 Hematocrit (Hct) control has been reported to be associated with reduced thrombotic risk in patients with PV. This was shown and popularized by the prospective, randomized Cytoreductive Therapy in Polycythemia Vera (CYTO-PV) trial in which participants were randomized 1:1 to maintaining either a low (< 45%) or high (45%-50%) Hct for 5 years to examine the long-term effects of more- or less-intensive cytoreductive therapy.12 Patients in the low-Hct group were found to have a lower rate of death from cardiovascular events or major thrombosis (1.1/100 person-years in the low-Hct group vs 4.4 in the high-Hct group; hazard ratio [HR], 3.91; 95% confidence interval [CI], 1.45-10.53; P = .007). Likewise, cardiovascular events occurred at a lower rate in patients in the low-Hct group compared with the high-Hct group (4.4% vs 10.9% of patients, respectively; HR, 2.69; 95% CI, 1.19-6.12; P = .02).12
Leukocytosis has also been linked to elevated risk for vascular events as shown in several studies, including the real-world European Collaboration on Low-Dose Aspirin in PV (ECLAP) observational study and a post hoc subanalysis of the CYTO-PV study.13,14 In a multivariate, time-dependent analysis in ECLAP, patients with white blood cell (WBC) counts > 15 × 109/L had a significant increase in the risk of thrombosis compared with those who had lower WBC counts, with higher WBC count more strongly associated with arterial than venous thromboembolism.13 In CYTO-PV, a significant correlation between elevated WBC count (≥ 11 × 109/L vs reference level of < 7 × 109/L) and time-dependent risk of major thrombosis was shown (HR, 3.9; 95% CI, 1.24-12.3; P = .02).14 Likewise, WBC count ≥ 11 × 109/L was found to be a predictor of subsequent venous events in a separate single-center multivariate analysis of patients with PV.8
Although CYTO-PV remains one of the largest prospective landmark studies in PV demonstrating the impact of Hct control on thrombosis, it is worthwhile to note that the patients in the high-Hct group who received less frequent myelosuppressive therapy with hydroxyurea than the low-Hct group also had higher WBC counts.12,15 Work is needed to determine the relative effects of high Hct and high WBC counts on PV independent of each other.
The Veteran Population with PV
Two recently published retrospective analyses from Parasuraman and colleagues used data from the Veterans Health Administration (VHA), the largest integrated health care system in the US, with an aim to replicate findings from CYTO-PV in a real-world population.16,17 The 2 analyses focused independently on the effects of Hct control and WBC count on the risk of a thrombotic event in patients with PV.
In the first retrospective analysis, 213 patients with PV and no prior thrombosis were placed into groups based on whether Hct levels were consistently either < 45% or ≥ 45% throughout the study period.17 The mean follow-up time was 2.3 years, during which 44.1% of patients experienced a thrombotic event (Figure 1). Patients with Hct levels < 45% had a lower rate of thrombotic events compared to those with levels ≥ 45% (40.3% vs 54.2%, respectively; HR, 1.61; 95% CI, 1.03-2.51; P = .04). In a sensitivity analysis that included patients with pre-index thrombotic events (N = 342), similar results were noted (55.6% vs 76.9% between the < 45% and ≥ 45% groups, respectively; HR, 1.95; 95% CI, 1.46-2.61; P < .001).
In the second analysis, the authors investigated the relationship between WBC counts and thrombotic events.16 Evaluable patients (N = 1565) were grouped into 1 of 4 cohorts based on the last WBC measurement taken during the study period before a thrombotic event or through the end of follow-up: (1) WBC < 7.0 × 109/L, (2) 7.0 to 8.4 × 109/L, (3) 8.5 to < 11.0 × 109/L, or (4) ≥ 11.0 × 109/L. Mean follow-up time ranged from 3.6 to 4.5 years among WBC count cohorts, during which 24.9% of patients experienced a thrombotic event. Compared with the reference cohort (WBC < 7.0 × 109/L), a significant positive association between WBC counts and thrombotic event occurrence was observed among patients with WBC counts of 8.5 to < 11.0 × 109/L (HR, 1.47; 95% CI, 1.10-1.96; P < .01) and ≥ 11 × 109/L (HR, 1.87; 95% CI, 1.44-2.43; P < .001) (Figure 2).16 When including all patients in a sensitivity analysis regardless of whether they experienced thrombotic events before the index date (N = 1876), similar results were obtained (7.0-8.4 × 109/L group: HR, 1.22; 95% CI, 0.97-1.55; P = .0959; 8.5 - 11.0 × 109/L group: HR, 1.41; 95% CI, 1.10-1.81; P = .0062; ≥ 11.0 × 109/L group: HR, 1.53; 95% CI, 1.23-1.91; P < .001; compared with < 7.0 × 109/L reference group). Rates of phlebotomy and cytoreductive treatments were similar across groups.16
Some limitations of these studies are attributable to their retrospective design, their reliance on health records, and the characteristics of the VHA population, which differ from those of the general population. For example, patients with PV in the VHA population had a significantly increased risk of thrombotic events at a lower WBC count threshold (≥ 8.5 × 10⁹/L) than the threshold reported in CYTO-PV (≥ 11 × 10⁹/L). Furthermore, approximately one-third of patients had elevated WBC levels, compared with 25.5% in the CYTO-PV study.14,16 This is most likely due to the unique nature of the VHA patient population, which is predominantly composed of older men who generally have a higher comorbidity burden. A notable pre-index comorbidity burden was reported in the VHA population in the Hct analysis, even when compared with patients with PV in the general US population (Charlson Comorbidity Index score, 1.3 vs 0.8).6,17 Comorbid conditions that are common in the VHA population, such as hypertension, diabetes, and tobacco use, are independently associated with a higher risk of cardiovascular and thrombotic events.18,19 However, whether this higher comorbidity burden affected the treatments patients received was not elucidated, and the effectiveness of treatments in maintaining target Hct levels was not addressed in the study.
Current PV Management and Future Implications
The National Comprehensive Cancer Network (NCCN) clinical practice guidelines in oncology for myeloproliferative neoplasms recommend maintaining Hct levels < 45% in patients with PV.11 Patients with high-risk disease (age ≥ 60 years and/or history of thrombosis) are monitored for new thrombosis or bleeding and are managed for their cardiovascular risk factors. In addition, they receive low-dose aspirin (81-100 mg/day), undergo phlebotomy to maintain an Hct < 45%, and are managed with pharmacologic cytoreductive therapy. Cytoreductive therapy consists primarily of hydroxyurea or, for younger patients, peginterferon alfa-2a. Ruxolitinib, a Janus kinase (JAK) 1/JAK2 inhibitor, is approved by the US Food and Drug Administration as second-line treatment for patients with PV who are intolerant of or have an inadequate response to hydroxyurea or peginterferon alfa-2a.11,20 However, the role of cytoreductive therapy is not clear for patients with low-risk disease (age < 60 years and no history of thrombosis). These patients are managed for their cardiovascular risk factors, undergo phlebotomy to maintain an Hct < 45%, are maintained on low-dose aspirin (81-100 mg/day), and are monitored for indications for cytoreductive therapy, which include any new thrombosis or disease-related major bleeding, frequent or persistent need for phlebotomy with poor tolerance of the procedure, splenomegaly, thrombocytosis, leukocytosis, and disease-related symptoms (eg, aquagenic pruritus, night sweats, fatigue).
Even though current guidelines recommend maintaining a target Hct of < 45% in patients with high-risk PV, the role of Hct as the main determinant of thrombotic risk in PV is still debated.21 In JAK2V617F-positive essential thrombocythemia, Hct levels are usually normal, yet the risk of thrombosis is nevertheless significant.22 The risk of thrombosis is significantly lower in primary familial and congenital polycythemia and much lower in secondary erythrocytosis, such as that seen in cyanotic heart disease, in long-term native dwellers at high altitude, and in carriers of high-oxygen–affinity hemoglobins.21,23 In secondary erythrocytosis arising from hypoxia or an upregulated hypoxia-sensing pathway, as in hypoxia-inducible factor-2α (HIF-2α) mutation and Chuvash erythrocytosis, the risk of thrombosis is associated more with the upregulated HIF pathway and its downstream consequences than with the elevated Hct level.24
However, most of the current literature supports an association between increased risk of thrombosis and both higher Hct and higher WBC counts in patients with PV. Meanwhile, the underlying mechanism of thrombogenesis remains elusive; it is likely a complex process involving interactions among multiple components, including elevated blood counts arising from clonal hematopoiesis, JAK2V617F allele burden, and platelet and WBC activation and their interactions with endothelial cells and inflammatory cytokines.25
Nevertheless, Hct control and aspirin use are the current standard of care for mitigating thrombotic risk in patients with PV, and the results of the 2 analyses by Parasuraman and colleagues, using real-world data from the VHA, support the current practice guideline to maintain Hct < 45% in these patients. They also provide additional support for considering WBC counts when determining patient risk and treatment plans. Although treatment response criteria from the European LeukemiaNet include achieving normal WBC levels to decrease the risk of thrombosis, current NCCN guidelines neither include WBC counts as a component for establishing patient risk nor provide a target WBC count to guide patient management.11,26,27 Updates to these practice guidelines may be warranted. In addition, further study is needed to understand the mechanism of thrombogenesis in PV and other myeloproliferative disorders in order to develop novel therapeutic targets and improve patient outcomes.
Acknowledgments
Writing assistance was provided by Tania Iqbal, PhD, an employee of ICON (North Wales, PA), and was funded by Incyte Corporation (Wilmington, DE).
1. Mehta J, Wang H, Iqbal SU, Mesa R. Epidemiology of myeloproliferative neoplasms in the United States. Leuk Lymphoma. 2014;55(3):595-600. doi:10.3109/10428194.2013.813500
2. Arber DA, Orazi A, Hasserjian R, et al. The 2016 revision to the World Health Organization classification of myeloid neoplasms and acute leukemia. Blood. 2016;127(20):2391-2405. doi:10.1182/blood-2016-03-643544
3. Tefferi A, Rumi E, Finazzi G, et al. Survival and prognosis among 1545 patients with contemporary polycythemia vera: an international study. Leukemia. 2013;27(9):1874-1881. doi:10.1038/leu.2013.163
4. Marchioli R, Finazzi G, Landolfi R, et al. Vascular and neoplastic risk in a large cohort of patients with polycythemia vera. J Clin Oncol. 2005;23(10):2224-2232. doi:10.1200/JCO.2005.07.062
5. Vannucchi AM, Antonioli E, Guglielmelli P, et al. Clinical profile of homozygous JAK2 617V>F mutation in patients with polycythemia vera or essential thrombocythemia. Blood. 2007;110(3):840-846. doi:10.1182/blood-2006-12-064287
6. Goyal RK, Davis KL, Cote I, Mounedji N, Kaye JA. Increased incidence of thromboembolic event rates in patients diagnosed with polycythemia vera: results from an observational cohort study. Blood (ASH Annual Meeting Abstracts). 2014;124:4840. doi:10.1182/blood.V124.21.4840.4840
7. Barbui T, Carobbio A, Rumi E, et al. In contemporary patients with polycythemia vera, rates of thrombosis and risk factors delineate a new clinical epidemiology. Blood. 2014;124(19):3021-3023. doi:10.1182/blood-2014-07-591610
8. Cerquozzi S, Barraco D, Lasho T, et al. Risk factors for arterial versus venous thrombosis in polycythemia vera: a single center experience in 587 patients. Blood Cancer J. 2017;7(12):662. doi:10.1038/s41408-017-0035-6
9. Stein BL, Moliterno AR, Tiu RV. Polycythemia vera disease burden: contributing factors, impact on quality of life, and emerging treatment options. Ann Hematol. 2014;93(12):1965-1976. doi:10.1007/s00277-014-2205-y
10. Hultcrantz M, Kristinsson SY, Andersson TM-L, et al. Patterns of survival among patients with myeloproliferative neoplasms diagnosed in Sweden from 1973 to 2008: a population-based study. J Clin Oncol. 2012;30(24):2995-3001. doi:10.1200/JCO.2012.42.1925
11. National Comprehensive Cancer Network. NCCN clinical practice guidelines in myeloproliferative neoplasms (Version 1.2020). Accessed March 3, 2022. https://www.nccn.org/professionals/physician_gls/pdf/mpn.pdf
12. Marchioli R, Finazzi G, Specchia G, et al. Cardiovascular events and intensity of treatment in polycythemia vera. N Engl J Med. 2013;368(1):22-33. doi:10.1056/NEJMoa1208500
13. Landolfi R, Di Gennaro L, Barbui T, et al. Leukocytosis as a major thrombotic risk factor in patients with polycythemia vera. Blood. 2007;109(6):2446-2452. doi:10.1182/blood-2006-08-042515
14. Barbui T, Masciulli A, Marfisi MR, et al. White blood cell counts and thrombosis in polycythemia vera: a subanalysis of the CYTO-PV study. Blood. 2015;126(4):560-561. doi:10.1182/blood-2015-04-638593
15. Prchal JT, Gordeuk VR. Treatment target in polycythemia vera. N Engl J Med. 2013;368(16):1555-1556. doi:10.1056/NEJMc1301262
16. Parasuraman S, Yu J, Paranagama D, et al. Elevated white blood cell levels and thrombotic events in patients with polycythemia vera: a real-world analysis of Veterans Health Administration data. Clin Lymphoma Myeloma Leuk. 2020;20(2):63-69. doi:10.1016/j.clml.2019.11.010
17. Parasuraman S, Yu J, Paranagama D, et al. Hematocrit levels and thrombotic events in patients with polycythemia vera: an analysis of Veterans Health Administration data. Ann Hematol. 2019;98(11):2533-2539. doi:10.1007/s00277-019-03793-w
18. WHO CVD Risk Chart Working Group. World Health Organization cardiovascular disease risk charts: revised models to estimate risk in 21 global regions. Lancet Glob Health. 2019;7(10):e1332-e1345. doi:10.1016/S2214-109X(19)30318-3
19. D’Agostino RB Sr, Vasan RS, Pencina MJ, et al. General cardiovascular risk profile for use in primary care: the Framingham Heart Study. Circulation. 2008;117(6):743-753. doi:10.1161/CIRCULATIONAHA.107.699579
20. Jakafi. Package insert. Incyte Corporation; 2020.
21. Gordeuk VR, Key NS, Prchal JT. Re-evaluation of hematocrit as a determinant of thrombotic risk in erythrocytosis. Haematologica. 2019;104(4):653-658. doi:10.3324/haematol.2018.210732
22. Carobbio A, Thiele J, Passamonti F, et al. Risk factors for arterial and venous thrombosis in WHO-defined essential thrombocythemia: an international study of 891 patients. Blood. 2011;117(22):5857-5859. doi:10.1182/blood-2011-02-339002
23. Perloff JK, Marelli AJ, Miner PD. Risk of stroke in adults with cyanotic congenital heart disease. Circulation. 1993;87(6):1954-1959. doi:10.1161/01.cir.87.6.1954
24. Gordeuk VR, Miasnikova GY, Sergueeva AI, et al. Thrombotic risk in congenital erythrocytosis due to up-regulated hypoxia sensing is not associated with elevated hematocrit. Haematologica. 2020;105(3):e87-e90. doi:10.3324/haematol.2019.216267
25. Kroll MH, Michaelis LC, Verstovsek S. Mechanisms of thrombogenesis in polycythemia vera. Blood Rev. 2015;29(4):215-221. doi:10.1016/j.blre.2014.12.002
26. Barbui T, Tefferi A, Vannucchi AM, et al. Philadelphia chromosome-negative classical myeloproliferative neoplasms: revised management recommendations from European LeukemiaNet. Leukemia. 2018;32(5):1057-1069. doi:10.1038/s41375-018-0077-1
27. Barosi G, Mesa R, Finazzi G, et al. Revised response criteria for polycythemia vera and essential thrombocythemia: an ELN and IWG-MRT consensus project. Blood. 2013;121(23):4778-4781. doi:10.1182/blood-2013-01-478891
Characterizing Opioid Response in Older Veterans in the Post-Acute Setting
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predispose them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common in this population, with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If pain and functional improvements are minimal, reducing opioid use and turning to nonopioid pain management strategies should be considered. Some patients will struggle with this approach: directly asking patients about the effectiveness of opioids is challenging, and opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies have assessed patient perspectives on both the difficulties and the helpfulness of opioids, an approach that could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study of veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 That study examined analgesics only as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (eg, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids, rather than relying on a single self-report, is a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates, with particular attention to variables that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We also expected that veterans with pain poorly responsive to opioids would be more likely to have factors placing them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in that population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receipt of ≥ 2 as-needed opioid doses over the prespecified 24-hour observation period; and having ≥ 2 pre- and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including standing orders of opioids. To obtain the sample, we initially characterized all veterans residing in the CLC-PAC unit on the 4 days as those reporting at least moderate pain (≥ 4) and those reporting no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing a higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
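To make this selection logic concrete, the following is a minimal Python sketch of the filtering step, assuming a hypothetical table of dose-level records; the column names and values are illustrative only and are not the study’s actual data schema.

```python
import pandas as pd

# Hypothetical dose-level records: one row per as-needed opioid administration,
# with the nurse-recorded pre- and post-administration pain scores (post may be
# missing). Column names and values are illustrative only.
admins = pd.DataFrame({
    "resident_id": [1, 1, 2, 2, 2, 3],
    "pre_score":   [8, 7, 5, 6, 4, 9],
    "post_score":  [3, 4, 5, None, 3, 8],
})

# Keep only administrations with both a pre- and a post-administration score.
paired = admins.dropna(subset=["pre_score", "post_score"])

# Inclusion: at least 1 pain score >= 4 (moderate pain) and >= 2 paired
# pre/post observations within the 24-hour window.
has_moderate_pain = paired.groupby("resident_id")["pre_score"].max() >= 4
n_pairs = paired.groupby("resident_id").size()
eligible_ids = n_pairs[(n_pairs >= 2) & has_moderate_pain].index

print(list(eligible_ids))  # [1, 2]; resident 3 has only 1 usable pair
```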
Medication administration data were obtained from the VA corporate data warehouse, which houses all barcode medication administration data collected at the point of care. The dataset includes pain scores gathered by nursing staff before and after administering an as-needed analgesic. The corporate data warehouse records the date/time of pain scores and the analgesic name, dosage, formulation, and date/time of administration. Using a standardized assessment form developed iteratively, we calculated opioid dosage in oral morphine equivalents (OME) for comparison.11,12 All abstracted data were reexamined for accuracy. Data initially were collected in an anonymized, blinded fashion. Participants were then unblinded for chart review. Initial data were captured in resident-days rather than unique residents because an individual resident might have been admitted on several observation days. We were primarily interested in how pain responded to opioids administered in response to resident request; therefore, we did not examine response to opioids that were continuously ordered (ie, scheduled). We did consider scheduled opioids when calculating total daily opioid dosage during the chart review.
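As an illustration of the OME calculation, the sketch below applies per-drug conversion factors of the kind published in the CDC table cited above (reference 11). The specific factors shown are commonly used values for oral formulations and are included here as assumptions, since the study’s exact conversion table is not reproduced in the text; methadone (dose-dependent factor) and transdermal fentanyl (dosed in mcg/h) require special handling and are omitted.

```python
# Illustrative conversion factors for oral opioid doses. These values are
# assumptions for demonstration, not the study's verified table.
OME_FACTOR = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def dose_to_ome(drug: str, dose_mg: float) -> float:
    """Convert a single oral opioid dose (mg) to oral morphine equivalents."""
    return dose_mg * OME_FACTOR[drug]

# Example: total daily OME for one resident's as-needed doses.
doses = [("oxycodone", 5.0), ("oxycodone", 5.0), ("tramadol", 50.0)]
total_ome = sum(dose_to_ome(drug, mg) for drug, mg in doses)
print(total_ome)  # 7.5 + 7.5 + 5.0 = 20.0
```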
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as the change in pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid pain score was the first score recorded within 3 hours of administration. Scores collected > 3 hours after opioid administration were excluded because, given the short half-lives of the opioids administered, they no longer accurately reflected the impact of the dose. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once each for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside the 24-hour window); this occurred once each for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10 and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10 and the postopioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged to determine that individual’s response to as-needed opioids; in this example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
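A minimal Python sketch of this pairing-and-averaging step, using the worked example above with hypothetical timestamps (the real pairing relied on barcode medication administration timestamps):

```python
from datetime import datetime, timedelta

# Hypothetical records for one resident over a 24-hour observation window.
doses = [
    (datetime(2017, 3, 1, 8, 0), 10),   # (administration time, pre-opioid score)
    (datetime(2017, 3, 1, 14, 0), 10),
]
followups = [
    (datetime(2017, 3, 1, 9, 30), 2),   # within 3 h of the 08:00 dose
    (datetime(2017, 3, 1, 15, 0), 6),   # within 3 h of the 14:00 dose
]

WINDOW = timedelta(hours=3)
deltas = []
for dose_time, pre_score in doses:
    # First post-administration score within 3 hours; later scores are excluded.
    post_score = next(
        (score for t, score in followups if dose_time < t <= dose_time + WINDOW),
        None,
    )
    if post_score is not None:
        deltas.append(pre_score - post_score)

mean_delta = sum(deltas) / len(deltas)
print(mean_delta)  # (8 + 4) / 2 = 6.0, matching the worked example in the text
```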
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable. Those with higher mean ∆ scores were considered to have pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models, controlling for average pre-opioid pain rating scores, to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This study was not adequately powered to detect differences across the spectrum of opioid responsiveness; the differences reported in this article should therefore be considered exploratory.
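For readers who want to mirror the modeling step, the following is a minimal sketch of an equivalent linear regression in Python with statsmodels (the study itself used Stata 15); the variable names and values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical resident-level data: mean delta score (opioid responsiveness),
# mean pre-opioid pain score, and a binary flag for any psychiatric diagnosis.
df = pd.DataFrame({
    "mean_delta":    [6.0, 2.5, 4.0, 1.0, 5.5, 3.0],
    "mean_pre_pain": [9.0, 7.0, 8.0, 6.0, 9.0, 7.0],
    "any_psych_dx":  [0, 1, 0, 1, 0, 1],
})

# Regress responsiveness on the variable of interest while controlling for the
# average pre-opioid pain rating, mirroring the models described above.
model = smf.ols("mean_delta ~ any_psych_dx + mean_pre_pain", data=df).fit()
print(model.params["any_psych_dx"])  # beta for the psychiatric-diagnosis flag
```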
Results
Over the 4-day observation period there were 146 resident-days. Of these, 88 (60.3%) included at least 1 pain score ≥ 4, and of those 88, 61 (41.8% of all resident-days) also included receipt of ≥ 1 as-needed opioid for pain. We identified 46 resident-days meeting the study criteria of ≥ 2 pre- and postanalgesic scores, corresponding to 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted on 3 of the 4 observation days. For individuals present on several observation days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between admission indication, medical comorbidities (including active cancer), and opioid responsiveness (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most involved tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of an SUD in remission, 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, either active or in remission. Consults to other services suggesting distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). The association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association between palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight participants (92.7%) received nonopioid adjuvant analgesics for pain. More than half received antidepressants (56.1%) or gabapentinoids (51.2%), although we did not assess whether these were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosage in OMEs, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe the relationship with clinical correlates in the understudied PAC population.
Earlier research demonstrated a relationship between the presence of psychiatric disorders and an increased likelihood of receiving any analgesic among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings suggest that the practice of treating high levels of pain by escalating the opioid dosage in patients with a comorbid psychiatric diagnosis should be reconsidered, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities, including active cancer; however, we did not identify a relationship between any of these and opioid responsiveness. Although SUDs were highly prevalent in our sample, there was no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in the genes encoding opioid receptors and the hepatic enzymes involved in drug metabolism, as well as an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, even though each opioid dose might be perceived as beneficial.15 Psychosocial distress is increasingly recognized as an important factor in opioid response. Adverse selection is the process culminating in those with psychosocial distress and/or SUDs being prescribed more opioids for longer durations.16 Our data suggest that this process could play a role in PAC settings. In addition, exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, is also possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors is likely predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could then revisit pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we collected data only on response to opioids in this study, any pain medication administered as needed (eg, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoiding increases to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control of their pain supports this recommendation.
Limitations
This study has several limitations, the most significant being its small sample size, a consequence of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who received opioids frequently at a single VA CLC-PAC unit; therefore, the results might not be representative of all veterans or of a more general population. Our small sample size limits the power to detect small differences. The data collected here should inform formal power calculations so that subsequent, larger studies select an adequate sample size. Validation studies that reproduce these findings, including samples drawn from the same population on different dates, are an important next step. Moreover, we had data on only a single dimension of pain (intensity/severity), as measured by the pain scale that nursing staff used to make the real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider pain measures that provide multidimensional assessment (eg, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids, and it is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain the robust correlations with psychiatric comorbidities, and it would be expected to be overcome with higher opioid dosages, whereas our study demonstrated less responsiveness at higher dosages. These data suggest that some individuals’ pain might be poorly opioid responsive and that psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method using larger sample sizes and several clinical sites. Finally, our regression models controlled for average pre-opioid pain rating scores, which is only 1 of the covariates important for examining these effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data, but the method requires further validation before it can be scaled for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies further research and underscores the need for prescribers to frequently assess individuals for ongoing benefit of opioids, regardless of the diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
1. Marshall TL, Reinhardt JP. Pain management in the last 6 months of life: predictors of opioid and non-opioid use. J Am Med Dir Assoc. 2019;20(6):789-790. doi:10.1016/j.jamda.2019.02.026
2. Tait RC, Chibnall JT. Pain in older subacute care patients: associations with clinical status and treatment. Pain Med. 2002;3(3):231-239. doi:10.1046/j.1526-4637.2002.02031.x
3. Pimentel CB, Briesacher BA, Gurwitz JH, Rosen AB, Pimentel MT, Lapane KL. Pain management in nursing home residents with cancer. J Am Geriatr Soc. 2015;63(4):633-641. doi:10.1111/jgs.13345
4. Hunnicutt JN, Tjia J, Lapane KL. Hospice use and pain management in elderly nursing home residents with cancer. J Pain Symptom Manage. 2017;53(3):561-570. doi:10.1016/j.jpainsymman.2016.10.369
5. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1-49. doi:10.15585/mmwr.rr6501e1
6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34-49. doi:10.1037/ser0000099
7. Goesling J, Moser SE, Lin LA, Hassett AL, Wasserman RA, Brummett CM. Discrepancies between perceived benefit of opioids and self-reported patient outcomes. Pain Med. 2018;19(2):297-306. doi:10.1093/pm/pnw263
8. Sullivan M, Von Korff M, Banta-Green C. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345-353. doi:10.1016/j.pain.2010.02.037
9. Brennan PL, Greenbaum MA, Lemke S, Schutte KK. Mental health disorder, pain, and pain treatment among long-term care residents: evidence from the Minimum Data Set 3.0. Aging Ment Health. 2019;23(9):1146-1155. doi:10.1080/13607863.2018.1481922
10. Woo A, Lechner B, Fu T, et al. Cut points for mild, moderate, and severe pain among cancer and non-cancer patients: a literature review. Ann Palliat Med. 2015;4(4):176-183. doi:10.3978/j.issn.2224-5820.2015.09.04
11. Centers for Disease Control and Prevention. Calculating total daily dose of opioids for safer dosage. 2017. Accessed December 15, 2021. https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. Nielsen S, Degenhardt L, Hoban B, Gisev N. Comparing opioids: a guide to estimating oral morphine equivalents (OME) in research. NDARC Technical Report No. 329. National Drug and Alcohol Research Centre; 2014. Accessed December 15, 2021. http://www.drugsandalcohol.ie/22703/1/NDARC Comparing opioids.pdf
13. Smith HS. Variations in opioid responsiveness. Pain Physician. 2008;11(2):237-248.
14. Collin E, Cesselin F. Neurobiological mechanisms of opioid tolerance and dependence. Clin Neuropharmacol. 1991;14(6):465-488. doi:10.1097/00002826-199112000-00001
15. Higgins C, Smith BH, Matthews K. Evidence of opioid-induced hyperalgesia in clinical populations after chronic opioid exposure: a systematic review and meta-analysis. Br J Anaesth. 2019;122(6):e114-e126. doi:10.1016/j.bja.2018.09.019
16. Howe CQ, Sullivan MD. The missing ‘P’ in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry. 2014;36(1):99-104. doi:10.1016/j.genhosppsych.2013.10.003
17. Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. HHS Publ No PEP19-5068, NSDUH Ser H-54. 2019;170:51-58. Accessed December 15, 2021. https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHNationalFindingsReport2018/NSDUHNationalFindingsReport2018.pdf
18. World Health Organization. WHO’s cancer pain ladder for adults. Accessed September 21, 2018. www.who.int/ncds/management/palliative-care/Infographic-cancer-pain-lowres.pdf
19. Ballantyne JC, Kalso E, Stannard C. WHO analgesic ladder: a good concept gone astray. BMJ. 2016;352:i20. doi:10.1136/bmj.i20
20. The Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guideline for opioid therapy for chronic pain. US Dept of Veterans Affairs and Dept of Defense; 2017. Accessed December 15, 2021. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf
21. Defense & Veterans Pain Rating Scale (DVPRS). Defense & Veterans Center for Integrative Pain Management. Accessed July 21, 2021. https://www.dvcipm.org/clinical-resources/defense-veterans-pain-rating-scale-dvprs/
22. Guy GP Jr, Zhang K, Bohm MK, et al. Vital signs: changes in opioid prescribing in the United States, 2006–2015. MMWR Morb Mortal Wkly Rep. 2017;66(26):697-704. doi:10.15585/mmwr.mm6626a4
Limitations
This study has several limitations, the most significant is its small sample size because of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who receive opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits power to detect small differences. Data collected should be used to inform formal power calculations before subsequent larger studies to select adequate sample size. Validation studies, including samples from the same population using different dates, which reproduce findings are an important step. Moreover, we only had data on a single dimension of pain (intensity/severity), as measured by the pain scale, which nursing staff used to make a real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider using pain measures that provide multidimensional assessment (ie, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, therefore limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids. It is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain robust correlations with psychiatric comorbidities. Also, simple tolerance would be expected to be overcome with higher opioid dosages, whereas our study demonstrates less responsiveness. These data suggests that some individuals’ pain might be poorly opioid responsive, and psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method, using larger sample sizes and several clinical sites. Finally, we used regression models that controlled for average pre-opioid pain rating scores, which is only 1 covariate important for examining effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data but requires further validation before considering scaling for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predisposes them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common among this population with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If improvements in pain and physical function are minimal, tapering opioids and emphasizing nonopioid pain management strategies should be considered, although some patients will struggle with this approach. Directly asking patients about the effectiveness of opioids also is challenging: opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies have relied on patients’ retrospective reports of both the difficulties and the helpfulness of opioids, which could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study in veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 That study examined analgesics only as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (eg, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids, rather than relying on a single self-report, would be a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates, with particular attention to variables that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We also expected that veterans with pain poorly responsive to opioids would be more likely to have factors placing them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in this population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receiving ≥ 2 opioids ordered as needed over the prespecified 24-hour observation period; and having ≥ 2 pre- and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including a standing order of opioids. To obtain the sample, we initially characterized all veterans residing in the CLC-PAC unit on the 4 days as those reporting at least moderate pain (≥ 4) and those who reported no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing a higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
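To make this selection logic concrete, the following minimal sketch (written in Python for illustration only; field names are hypothetical and not drawn from the VA data) applies the three inclusion criteria to a set of resident-days:

from dataclasses import dataclass

# A minimal sketch of the sample-selection logic above, for illustration
# only. Field names are hypothetical, not drawn from the VA data.

@dataclass
class ResidentDay:
    resident_id: str
    max_pain_score: int      # highest 0-10 numeric pain rating that day
    prn_opioid_count: int    # as-needed opioid administrations in 24 hours
    paired_score_count: int  # pre- and postopioid pain score pairs

def meets_inclusion_criteria(day: ResidentDay) -> bool:
    # The three criteria described in the Methods.
    return (day.max_pain_score >= 4
            and day.prn_opioid_count >= 2
            and day.paired_score_count >= 2)

resident_days = [
    ResidentDay("A", 7, 3, 3),  # included
    ResidentDay("B", 3, 2, 2),  # excluded: no score of at least 4
    ResidentDay("C", 8, 1, 1),  # excluded: fewer than 2 as-needed opioids
]
sample = [d for d in resident_days if meets_inclusion_criteria(d)]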
Medication administration data were obtained from the VA corporate data warehouse, which houses all barcode medication administration data collected at the point of care. The dataset includes pain scores gathered by nursing staff before and after administering an as-needed analgesic. The corporate data warehouse records date/time of pain scores and the analgesic name, dosage, formulation, and date/time of administration. Using a standardized assessment form developed iteratively, we calculated opioid dosage in oral morphine equivalents (OME) for comparison.11,12 All abstracted data were reexamined for accuracy. Data initially were collected in an anonymized, blinded fashion. Participants were then unblinded for chart review. Initial data were captured in resident-days instead of unique residents because an individual resident might have been admitted on several observation days. We were primarily interested in how pain responded to opioids administered in response to resident request; therefore, we did not examine response to opioids that were continuously ordered (ie, scheduled). We did consider scheduled opioids when calculating total daily opioid dosage during the chart review.
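As a concrete illustration of the OME arithmetic, the sketch below applies fixed conversion factors of the kind published in the CDC table cited above (reference 11). The factors and drug list are illustrative; methadone and transdermal fentanyl require dose-dependent handling and are deliberately omitted:

# An illustrative sketch of converting individual doses to oral morphine
# equivalents (OME). Conversion factors are example values of the kind
# published in the CDC table (reference 11); methadone and transdermal
# fentanyl require dose-dependent handling and are omitted here.
OME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def dose_to_ome(drug: str, dose_mg: float) -> float:
    return dose_mg * OME_FACTORS[drug]

# Example: 10 mg oxycodone (15 OME) + 2 mg hydromorphone (8 OME) = 23 OME.
daily_ome = dose_to_ome("oxycodone", 10) + dose_to_ome("hydromorphone", 2)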
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as the change in pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid pain score was the first score obtained within 3 hours of opioid administration. Scores collected > 3 hours after opioid administration were excluded because, given the short half-lives of the opioids administered, they no longer accurately reflected the impact of the dose. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once each for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside of the 24-hour window); this occurred once each for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10 and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10 and the postopioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged to determine that individual’s response to as-needed opioids; in this example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
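The worked example above can be expressed directly in code. This sketch uses fabricated scores and timestamps, applies the 3-hour cutoff described earlier, and reproduces the mean ∆ score of 6:

from datetime import datetime, timedelta

# A sketch of the mean ∆ score calculation with fabricated scores and
# times. Postopioid scores recorded more than 3 hours after
# administration are dropped, mirroring the exclusion rule above.
MAX_FOLLOWUP = timedelta(hours=3)

def mean_delta(doses):
    # doses: (pre_score, post_score, admin_time, post_score_time) per dose
    deltas = [pre - post
              for pre, post, t_admin, t_post in doses
              if t_post - t_admin <= MAX_FOLLOWUP]
    return sum(deltas) / len(deltas) if deltas else None

day = datetime(2016, 5, 1)
doses = [
    (10, 2, day.replace(hour=8), day.replace(hour=9)),    # delta = 8
    (10, 6, day.replace(hour=14), day.replace(hour=15)),  # delta = 4
]
print(mean_delta(doses))  # 6.0, matching the worked example above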
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable. Those with higher mean ∆ scores were considered to have pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models controlling for average pre-opioid pain rating scores to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This study was not adequately powered to detect differences across the spectrum of opioid responsiveness; the associations reported in this article should therefore be considered exploratory.
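For readers who want to reproduce the general approach, the following minimal sketch fits an analogous model in Python, with statsmodels standing in for Stata; the data frame values and column names are fabricated for illustration and do not reproduce study data:

import pandas as pd
import statsmodels.formula.api as smf

# A minimal sketch of the regression approach, using statsmodels as a
# stand-in for Stata. All values and column names are fabricated for
# illustration and do not reproduce study data.
df = pd.DataFrame({
    "mean_delta":    [5.1, 2.2, 3.8, 1.0, 4.4, 2.9],  # opioid response
    "mean_pre_pain": [8, 7, 9, 8, 7, 9],              # control covariate
    "any_psych_dx":  [0, 1, 0, 1, 0, 1],              # variable of interest
})

# Each model controls for the average pre-opioid pain rating score.
model = smf.ols("mean_delta ~ any_psych_dx + mean_pre_pain", data=df).fit()
print(model.params["any_psych_dx"])  # estimated beta for the diagnosis term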
Results
Over the 4-day observational period there were 146 resident-days. Of these, 88 (60.3%) included at least 1 pain score of ≥ 4, and 61 (41.8%) included ≥ 1 as-needed opioid administered for pain. We identified 46 resident-days meeting the study criteria of ≥ 2 pre- and postanalgesic scores, representing 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted on 3 of the 4 observation days. For individuals admitted on several observation days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between admission indication, medical comorbidities (including active cancer), and opioid responsiveness (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most were tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of an SUD in remission, including 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, whether active or in remission. Consults to other services suggesting distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). The association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association between palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight participants (92.7%) received nonopioid adjuvant analgesics for pain. More than half received antidepressants (56.1%) or gabapentinoids (51.2%), although we did not assess whether these were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosages in OME, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe its relationship with clinical correlates in the understudied PAC population.
Earlier research has demonstrated a relationship between the presence of psychiatric disorders and an increased likelihood of receiving any analgesics among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings suggest that escalating the opioid dosage to treat high levels of pain in patients with a comorbid psychiatric diagnosis should be reconsidered, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities; however, we did not identify a relationship between any of them, including active cancer, and opioid responsiveness. Although SUDs were highly prevalent in our sample, they showed no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in genes encoding opioid receptors and hepatic enzymes involved in drug metabolism, as well as an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, even though each opioid dose might be perceived as beneficial.15 Psychosocial distress is increasingly recognized as an important factor in opioid response. Adverse selection is the process culminating in those with psychosocial distress and/or SUDs being prescribed more opioids for longer durations.16 Our data suggest that this process could play a role in PAC settings. In addition, exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, also is possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors is likely predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could then revisit pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we collected data only on response to opioids in this study, any pain medication administered as needed (eg, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoiding dosage increases to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
Limitations
This study has several limitations, the most significant of which is its small sample size, a consequence of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who received opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. The small sample size limits power to detect small differences. The data collected should be used to inform formal power calculations when selecting adequate sample sizes for subsequent larger studies. Validation studies that reproduce these findings, including in samples drawn from the same population on different dates, are an important next step. Moreover, we had data on only a single dimension of pain (intensity/severity), as measured by the pain scale nursing staff used to make a real-time clinical decision about whether to administer an as-needed opioid. Future studies should consider pain measures that provide multidimensional assessment (eg, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids, and it is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain the robust correlations with psychiatric comorbidities. Simple tolerance also would be expected to be overcome with higher opioid dosages, whereas our study demonstrated less responsiveness at higher dosages. These data suggest that some individuals’ pain might be poorly opioid responsive and that psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method using larger sample sizes and several clinical sites. Finally, our regression models controlled for average pre-opioid pain rating scores, which is only 1 of the covariates important for examining effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data, but it requires further validation before being scaled for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids, regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
1. Marshall TL, Reinhardt JP. Pain management in the last 6 months of life: predictors of opioid and non-opioid use. J Am Med Dir Assoc. 2019;20(6):789-790. doi:10.1016/j.jamda.2019.02.026
2. Tait RC, Chibnall JT. Pain in older subacute care patients: associations with clinical status and treatment. Pain Med. 2002;3(3):231-239. doi:10.1046/j.1526-4637.2002.02031.x
3. Pimentel CB, Briesacher BA, Gurwitz JH, Rosen AB, Pimentel MT, Lapane KL. Pain management in nursing home residents with cancer. J Am Geriatr Soc. 2015;63(4):633-641. doi:10.1111/jgs.13345
4. Hunnicutt JN, Tjia J, Lapane KL. Hospice use and pain management in elderly nursing home residents with cancer. J Pain Symptom Manage. 2017;53(3):561-570. doi:10.1016/j.jpainsymman.2016.10.369
5. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1-49. doi:10.15585/mmwr.rr6501e1
6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34-49. doi:10.1037/ser0000099
7. Goesling J, Moser SE, Lin LA, Hassett AL, Wasserman RA, Brummett CM. Discrepancies between perceived benefit of opioids and self-reported patient outcomes. Pain Med. 2018;19(2):297-306. doi:10.1093/pm/pnw263
8. Sullivan M, Von Korff M, Banta-Green C. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345-353. doi:10.1016/j.pain.2010.02.037
9. Brennan PL, Greenbaum MA, Lemke S, Schutte KK. Mental health disorder, pain, and pain treatment among long-term care residents: evidence from the Minimum Data Set 3.0. Aging Ment Health. 2019;23(9):1146-1155. doi:10.1080/13607863.2018.1481922
10. Woo A, Lechner B, Fu T, et al. Cut points for mild, moderate, and severe pain among cancer and non-cancer patients: a literature review. Ann Palliat Med. 2015;4(4):176-183. doi:10.3978/j.issn.2224-5820.2015.09.04
11. Centers for Disease Control and Prevention. Calculating total daily dose of opioids for safer dosage. 2017. Accessed December 15, 2021. https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. Nielsen S, Degenhardt L, Hoban B, Gisev N. Comparing opioids: a guide to estimating oral morphine equivalents (OME) in research. NDARC Technical Report No. 329. National Drug and Alcohol Research Centre; 2014. Accessed December 15, 2021. http://www.drugsandalcohol.ie/22703/1/NDARC Comparing opioids.pdf
13. Smith HS. Variations in opioid responsiveness. Pain Physician. 2008;11(2):237-248.
14. Collin E, Cesselin F. Neurobiological mechanisms of opioid tolerance and dependence. Clin Neuropharmacol. 1991;14(6):465-488. doi:10.1097/00002826-199112000-00001
15. Higgins C, Smith BH, Matthews K. Evidence of opioid-induced hyperalgesia in clinical populations after chronic opioid exposure: a systematic review and meta-analysis. Br J Anaesth. 2019;122(6):e114-e126. doi:10.1016/j.bja.2018.09.019
16. Howe CQ, Sullivan MD. The missing ‘P’ in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry. 2014;36(1):99-104. doi:10.1016/j.genhosppsych.2013.10.003
17. Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. HHS Publ No PEP19-5068, NSDUH Ser H-54. 2019;170:51-58. Accessed December 15, 2021. https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHNationalFindingsReport2018/NSDUHNationalFindingsReport2018.pdf
18. World Health Organization. WHO’s cancer pain ladder for adults. Accessed September 21, 2018. www.who.int/ncds/management/palliative-care/Infographic-cancer-pain-lowres.pdf
19. Ballantyne JC, Kalso E, Stannard C. WHO analgesic ladder: a good concept gone astray. BMJ. 2016;352:i20. doi:10.1136/bmj.i20
20. The Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guideline for opioid therapy for chronic pain. US Dept of Veterans Affairs and Dept of Defense; 2017. Accessed December 15, 2021. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf
21. Defense & Veterans Pain Rating Scale (DVPRS). Defense & Veterans Center for Integrative Pain Management. Accessed July 21, 2021. https://www.dvcipm.org/clinical-resources/defense-veterans-pain-rating-scale-dvprs/
22. Guy GP Jr, Zhang K, Bohm MK, et al. Vital signs: changes in opioid prescribing in the United States, 2006–2015. MMWR Morb Mortal Wkly Rep. 2017;66(26):697-704. doi:10.15585/mmwr.mm6626a4
“Fishy” papule
A biopsy was performed to exclude squamous cell carcinoma, and an additional biopsy was sent for tissue culture for aerobic and acid-fast bacteria. The culture revealed a surprising diagnosis: cutaneous Mycobacterium marinum infection.
Mycobacterium marinum is one of many nontuberculous mycobacteria that may rarely cause infections in immunocompetent patients. M marinum is found worldwide in saltwater and freshwater. Infections may occur in individuals who work in fisheries or fish markets, spend time in natural marine environments, or maintain aquariums. The organism may gain access through small, even unnoticed, breaks in the skin. Papules, pustules, or abscesses caused by M marinum develop a few weeks after exposure and share many features with other common skin infections, including those caused by Staphylococcus aureus. Lymphatic involvement and sporotrichoid spread may occur. Immunocompromised patients can experience deeper involvement extending into tendons. Patients with significant soft tissue pain should undergo computed tomography, or preferably magnetic resonance imaging, to determine the extent of disease.
For immunocompetent patients and those with limited disease, as in this case, spontaneous resolution can occur after a year or more. However, because of the potential for more severe disease, treatment is recommended. M marinum is resistant to multiple antibiotics, and there are no standardized treatment guidelines. Minocycline 100 mg bid for 3 weeks to 3 months is 1 accepted regimen for limited disease; treatment should be continued for 3 to 4 weeks following clinical resolution.1 Patients with more widespread disease benefit from evaluation by an infectious diseases specialist. Patients exposed to atypical mycobacteria may have false-positive results on QuantiFERON-TB Gold tests, which are commonly performed prior to biologic therapies.2
This patient achieved complete resolution of his signs and symptoms after receiving minocycline 100 mg bid for 6 weeks. He continues to fish recreationally.
Text courtesy of Jonathan Karnes, MD, medical director, MDFMR Dermatology Services, Augusta, ME. Photos courtesy of Jonathan Karnes, MD (copyright retained).
1. Rallis E, Koumantaki-Mathioudaki E. Treatment of Mycobacterium marinum cutaneous infections. Expert Opin Pharmacother. 2007;8:2965-2978. doi:10.1517/14656566.8.17.2965
2. Gajurel K, Subramanian AK. False-positive QuantiFERON TB-Gold test due to Mycobacterium gordonae. Diagn Microbiol Infect Dis. 2016;84:315-317. doi:10.1016/j.diagmicrobio.2015.10.020