Time-restricted eating shows no weight-loss benefit in RCT

The popular new weight-loss approach of eating within a restricted window of time during the day, allowing for an extended period of fasting – also known as intermittent fasting – does not result in greater weight loss, compared with nonrestricted meal timing, results from a randomized clinical trial show.

“I was very surprised by all of [the results],” senior author Ethan J. Weiss, MD, said in an interview.

“Part of the reason we did the study was because I had been doing time-restricted eating myself for years and even recommending it to friends and patients as an effective weight-loss tool,” said Dr. Weiss, of the Cardiovascular Research Institute, University of California, San Francisco.

“But no matter how you slice it, prescription of time-restricted eating – at least this version – is not a very effective weight-loss strategy,” Dr. Weiss said.

The study, published online in JAMA Internal Medicine by Dylan A. Lowe, PhD, also of the University of California, San Francisco, involved 116 participants who were randomized to a 12-week regimen of either three structured meals per day or time-restricted eating, with instructions to eat only between 12:00 p.m. and 8:00 p.m. and to completely abstain from eating at other times.

The participants were not given any specific instructions regarding caloric or macronutrient intake “so as to offer a simple, real-world recommendation to free-living individuals,” the authors wrote.

Although some prior research has shown improvements in measures such as glucose tolerance with time-restricted eating, studies showing weight loss with the approach, including one recently reported by Medscape Medical News, have been small and lacked control groups.

“To my knowledge this is the first randomized, controlled trial and definitely the biggest,” said Dr. Weiss. “I think it is the most comprehensive dataset available in people, at least for this intervention.”
 

Participants used app to log details

At baseline, participants had a mean weight of 99.2 kg (approximately 219 lb). Their mean age was 46.5 years, and 60.3% were men. They were recruited from across the United States and received study surveys through a custom mobile study application on the Eureka Research Platform. They were given a Bluetooth weight scale to use daily, which was connected to the app, and were randomized to one of the two interventions. A subset of 50 participants living near San Francisco underwent in-person testing.

At the end of the 12 weeks, those in the time-restricted eating group (n = 59) did have a significant decrease in weight, compared with baseline (−0.94 kg; P = .01), while weight loss in the consistent-meal group (n = 57) was not significant (−0.68 kg; P = .07).

But importantly, the difference in weight loss between the groups was not significant (−0.26 kg; P = .63).
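
That between-group estimate is consistent with the simple arithmetic difference of the two within-group changes reported above (a reader's check, not an additional analysis from the trial):

\[
-0.94\ \text{kg} - (-0.68\ \text{kg}) = -0.26\ \text{kg}
\]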

There were no significant differences in the secondary outcomes of fasting insulin, glucose, hemoglobin A1c, or blood lipids within or between the time-restricted eating and consistent-meal groups, either. Nor were there any significant differences in resting metabolic rate.

Although participants did not self-report their caloric intake, the authors used mathematical modeling developed at the National Institutes of Health to estimate that differences in intake between the groups were not significant.

Rates of adherence to the diets were 92.1% in the consistent-meal group versus 83.5% in the time-restricted group.

Not all diets are equal: Time-restricted eating group lost more lean mass

In a subset analysis, loss of lean mass was significantly greater in the time-restricted eating group, compared with the consistent-meals group, in terms of both appendicular lean mass (P = .009) and the appendicular lean mass index (P = .005).

In fact, as much as 65% of the weight lost (1.10 kg of the average 1.70 kg) in the time-restricted eating group consisted of lean mass, while much less was fat mass (0.51 kg).
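
As a quick back-of-the-envelope check of those proportions using the subgroup means quoted here (the two components do not sum exactly to the 1.70-kg total; the small remainder presumably reflects other tissue compartments and rounding):

\[
\frac{1.10\ \text{kg}}{1.70\ \text{kg}} \approx 0.65, \qquad \frac{0.51\ \text{kg}}{1.70\ \text{kg}} \approx 0.30
\]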

“The proportion of lean mass loss in this study (approximately 65%) far exceeds the normal range of 20%-30%,” the authors wrote. “In addition, there was a highly significant between-group difference in appendicular lean mass.”

Appendicular lean mass correlates with nutritional and physical status, and its reduction can lead to weakness, disability, and impaired quality of life.

“This serves as a caution for patient populations at risk for sarcopenia because time-restricted eating could exacerbate muscle loss,” the authors asserted.

Furthermore, previous studies suggest that loss of lean mass during weight-loss interventions is positively linked with weight regain.

Although a limitation of the work is that self-reported measures of energy, macronutrient, or protein intake were not obtained, the authors speculated that protein intake could be linked to the greater loss of lean mass.

“Given the loss of appendicular lean mass in participants in the time-restricted eating arm and previous reports of decreased protein consumption from time-restricted eating, it is possible that protein intake was altered by time-restricted eating in this cohort, and this clearly warrants future study,” they wrote.

Dr. Weiss said the findings underscore that not all weight loss in dieting is beneficial.

“Losing 1 kg of lean mass (is not equal) to a kilogram of fat,” he said. “Indeed, if one loses 0.65 kg of lean mass and only 0.35 kg of fat mass, that is an intervention I’d probably pass on.”
 

Time-restricted eating is popular, perhaps because it’s easy?

Time-restricted eating has gained popularity in recent years.

The approach “is attractive as a weight-loss option in that it does not require tedious and time-consuming methods such as calorie counting or adherence to complicated diets,” the authors noted. “Indeed, we found that self-reported adherence to the time-restricted eating schedule was high; however, in contrast to our hypothesis, there was no greater weight loss with time-restricted eating compared with the consistent meal timing.”

They explained that the 12 p.m. to 8 p.m. eating window was chosen because they thought people might find it culturally easier to skip breakfast than dinner, the more social meal.

However, an 8 p.m. cutoff is somewhat late given there is some suggestion that fasting several hours before bedtime is most beneficial, Dr. Weiss noted. So it may be worth examining different time windows.

“I am very intrigued about looking at early time-restricted eating – 6 a.m. to 2 p.m.,” for example, he said. “It is on our list.”

Meanwhile, the study results support previous research showing no effect on weight outcomes in relation to skipping breakfast.

The study received funding from the UCSF cardiology division’s Cardiology Innovations Award Program and the National Institute of Diabetes and Digestive and Kidney Diseases, with additional support from the James Peter Read Foundation. Dr. Weiss has reported nonfinancial support from Mocacare and nonfinancial support from iHealth Labs during the conduct of the study. He also is a cofounder and equity stakeholder of Keyto, and owns stock and was formerly on the board of Virta.

A version of this article originally appeared on Medscape.com.

Keep desiccated thyroid as a treatment option for hypothyroidism

For patients with hypothyroidism who underwent treatment with desiccated thyroid, there were no significant differences in the time spent in normal ranges of thyroid-stimulating hormone (TSH) over 3 years, compared with patients who received the standard therapy of synthetic levothyroxine (T4), new research shows.

The findings are “unanticipated ... given concerns for variability between batches of desiccated thyroid cited by national guidelines,” wrote the authors of the study, which was published this month in the Annals of Family Medicine.

In the trial, patients who had been treated for hypothyroidism at Kaiser Permanente Colorado were matched retrospectively into groups of 450 patients each according to whether they were treated with desiccated thyroid or synthetic levothyroxine.

After a follow-up of 3 years, TSH values within normal ranges (0.320-5.500 μIU/mL) were seen at approximately the same rate among those treated with desiccated thyroid and those who received levothyroxine (79.1% vs. 79.3%; P = .905).

“This study showed that after 3 years TSH values in both groups remained within reference ranges approximately 80% of the time,” said Rolake Kuye, PharmD, and colleagues with Kaiser Permanente, in Denver, Colorado.

In an accompanying editorial, Jill Schneiderhan, MD, and Suzanna Zick, ND, MPH, of the University of Michigan, Ann Arbor, say the overall results indicate that the continued use of desiccated thyroid is warranted in some cases.

“Keeping desiccated thyroid medications as an option in our tool kit will allow for improved shared decision-making, while allowing for patient preference, and offer an option for those patients who remain symptomatic on levothyroxine monotherapy,” they advised.
 

Some variability still seen with desiccated thyroid

Desiccated thyroid (dehydrated porcine thyroid) was long the standard of care and is still commonly used to treat hypothyroidism, despite having been largely replaced beginning in the 1970s by synthetic levothyroxine, in light of evidence that the older preparation was associated with more variability in thyroid hormone levels.

Desiccated thyroid is still sold legally by prescription in the United States under the names Nature-Throid, Thyroid USP, and Armour Thyroid and is currently used by up to 30% of patients with hypothyroidism, according to recent estimates.

Consistent with concerns about variability in thyroid hormone levels, the new study did show greater variability in TSH levels with desiccated thyroid when assessed on a visit-to-visit basis.

Dr. Kuye and coauthors therefore recommended that, “[f]or providers targeting a tighter TSH goal in certain patients, the decreased TSH variability with levothyroxine could be clinically meaningful.”
 

This long-term investigation is “much needed”

This new study adds important new insight to the ongoing debate over hypothyroidism treatment, said Dr. Schneiderhan and Dr. Zick in their editorial.

“[The study authors] begin a much-needed investigation into whether patients prescribed synthetic levothyroxine compared with desiccated thyroid had differences in TSH stability over the course of 3 years.

“Further prospective studies are needed to confirm these results and to explore differences in more diverse patient populations, such as Hashimoto’s thyroiditis, as well as on quality of life and other important patient-reported outcomes such as fatigue and weight gain,” the editorialists added.

“This study does, however, provide helpful information that desiccated thyroid products are a reasonable choice for treating some hypothyroid patients.”

For 60% of patients in both groups, TSH levels were within the reference range for the whole study

In the study, Dr. Kuye and colleagues matched patients (average age, 63 years; 90% women) in terms of characteristics such as race, comorbidities, and cholesterol levels.

Patients were excluded if they had been prescribed more than one agent for the treatment of hypothyroidism, if they had related comorbid conditions such as a history of thyroid cancer, or if they were pregnant.

With respect to visit-to-visit TSH variability, levothyroxine was associated with significantly less variability than desiccated thyroid (1.25 vs. 1.44; P = .015). However, for 60% of patients in both groups, all TSH values measured during the study period were within reference ranges (P = .951).

The median number of TSH laboratory studies obtained during the study was four in the synthetic levothyroxine group and three for patients prescribed desiccated thyroid (P = .578).

There were some notable differences between the groups. Patients in the desiccated thyroid group had a lower body mass index (P = .032), lower hemoglobin A1c levels (P = .041), and lower baseline TSH values (2.4 vs. 3.4 μIU/mL; P = .001), compared with those prescribed levothyroxine.

Limitations include the fact that the authors could not account for potentially important variables such as rates of adherence, differences in prescriber practice between agents, or the concurrent use of other medications.
 

Subjective outcomes not assessed: “One-size-fits-all approach doesn’t work”

The authors note they were not able to assess subjective outcomes, which, as noted by the editorialists, are particularly important in hypothyroidism.

“Emerging evidence shows that for many patients, symptoms persist despite normal TSH values,” Dr. Schneiderhan and Dr. Zick write.

They cite as an example a large study that found significant impairment in psychological well-being among patients treated with thyroxine replacement, despite their achieving normal TSH levels.

In addition, synthetic levothyroxine is associated with other uncertainties, such as complexities in the conversion of T4 to triiodothyronine (T3) that may disrupt thyroid metabolism in some patients.

There are also differences in the amounts of thyroid replacement needed by certain groups, such as patients who have undergone thyroidectomies.

“The one-size-fits-all approach for treating hypothyroidism does not work ... for all patients,” they concluded.

The study authors and editorialists have disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.

Atypical fractures with bisphosphonates highest in Asians, study confirms

The latest findings regarding the risk for atypical femur fracture (AFF) with use of bisphosphonates for osteoporosis show a significant increase in risk when treatment extends beyond 5 years. The risk is notably higher among Asian women, compared with White women. However, the benefits in fracture reduction still appear to far outweigh the risk for AFF.

The research, published in the New England Journal of Medicine, importantly adds to findings from smaller studies by showing effects in a diverse cohort of nearly 200,000 women, said Angela M. Cheung, MD, PhD.

“This study answers some important questions – Kaiser Permanente Southern California is a large health maintenance organization with a diverse racial population,” said Dr. Cheung, director of the Center of Excellence in Skeletal Health Assessment and osteoporosis program at the University of Toronto.

“This is the first study that included a diverse population to definitively show that Asians are at a much higher risk of atypical femur fractures than Caucasians,” she emphasized.

Although AFFs are rare, concerns about them remain pressing in the treatment of osteoporosis, Dr. Cheung noted. “This is a big concern for clinicians – they want to do no harm.”
 

Risk for AFF increases with longer duration of bisphosphonate use

For the study, Dennis M. Black, PhD, of the departments of epidemiology and biostatistics and orthopedic surgery at the University of California, San Francisco, and colleagues identified women aged 50 years or older enrolled in the Kaiser Permanente Southern California system who were treated with bisphosphonates and were followed from January 2007 to November 2017.

Among the 196,129 women identified in the study, 277 AFFs occurred.

After multivariate adjustment, and compared with women treated for less than 3 months, the hazard ratio for experiencing an AFF was 8.86 for women treated for 3-5 years. For therapy of 5-8 years, the HR increased to 19.88, and for those treated with bisphosphonates for 8 years or longer, the HR was 43.51.

The risk for AFF declined quickly upon bisphosphonate discontinuation; compared with current users, the HR dropped to 0.52 within 3-15 months after the last bisphosphonate use. It declined to 0.26 at more than 4 years after discontinuation.

The risk for AFF with bisphosphonate use was higher for Asian women than for White women (HR, 4.84); this did not apply to any other ethnic groups (HR, 0.99).

Other risk factors for AFF included shorter height (HR, 1.28 per 5-cm decrement), greater weight (HR, 1.15 per 5-kg increment), and glucocorticoid use (HR, 2.28 for glucocorticoid use of 1 or more years).

Among White women, the number of fractures prevented with bisphosphonate use far outweighed the risk for bisphosphonate-associated AFFs.

For example, among White women, during a 3-year treatment period, there were two bisphosphonate-associated AFFs, whereas 149 hip fractures and 541 clinical fractures were prevented, the authors wrote.

After 5 years, there were eight AFFs, but 286 hip fractures and 859 clinical fractures were prevented.

Although the risk-benefit ratio among Asian women still favored prevention of fractures, the difference was less pronounced – eight bisphosphonate-associated AFFs had occurred at 3 years, whereas 91 hip fractures and 330 clinical fractures were prevented.
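
Expressed as a crude prevented-to-caused ratio using the 3-year counts quoted above (a simplification that ignores fracture type and person-time, offered only to illustrate why the balance is less pronounced):

\[
\text{White women: } \frac{541}{2} \approx 270 \text{ clinical fractures prevented per AFF}; \qquad \text{Asian women: } \frac{330}{8} \approx 41
\]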

The authors noted that previous studies have also shown Asian women to be at a disproportionately higher risk for AFF.

An earlier Kaiser Permanente Southern California case series showed that 49% of 142 AFFs occurred in Asian patients, despite the fact that those patients made up only 10% of the study population.

Various factors could cause higher risk in Asian women

The reasons for the increased risk among Asian women are likely multifactorial and could include greater medication adherence among Asian women, genetic differences in drug metabolism and bone turnover, and, notably, increased lateral stress caused by bowed Asian femora, the authors speculated.

Further questions include whether the risk is limited to Asians living outside of Asia and whether cultural differences in diet or physical activity are risk factors, they added.

“At this early stage, further research into the cause of the increased risk among women of Asian ancestry is warranted,” they wrote.

Although the risk for AFF may be higher among Asian women, the incidence of hip and other osteoporotic fractures is lower among Asians as well as other non-White persons, compared with White persons, they added.

The findings have important implications in how clinicians should discuss treatment options with different patient groups, Dr. Cheung said.

“I think this is one of the key findings of the study,” she added. “In this day and age of personalized medicine, we need to keep the individual patient in mind, and that includes their racial/ethnic background, genetic characteristics, sex, medical conditions and medications, etc. So it is important for physicians to pay attention to this. The risk-benefit ratio of these drugs for Asians will be quite different, compared to Caucasians.”
 

No link between traditional fracture risk factors and AFF, study shows

Interestingly, although older age, previous fractures, and lower bone mineral density are key risk factors for hip and other osteoporotic fractures in the general population, they do not significantly increase the risk for AFF with bisphosphonate use, the study also showed.

“In fact, the oldest women in our cohort, who are at highest risk for hip and other fractures, were at lowest risk for AFF,” the authors wrote.

The collective findings “add to the risk-benefit balance of bisphosphonate treatment in these populations and could directly affect decisions regarding treatment initiation and duration.”

Notable limitations of the study include the fact that most women were treated with one particular bisphosphonate, alendronate, and that other bisphosphonates were underrepresented, Dr. Cheung said.

“This study examined bisphosphonate therapy, but the vast majority of the women were exposed to alendronate, so whether women on risedronate or other bisphosphonates have similar risks is unclear,” she observed.

“In addition, because they can only capture bisphosphonate use using their database, any bisphosphonate exposure prior to joining Kaiser Permanente will not be captured. So the study may underestimate the total cumulative duration of bisphosphonate use,” she added.

The study received support from Kaiser Permanente and discretionary funds from the University of California, San Francisco. The study began with a pilot grant from Merck Sharp & Dohme, which had no role in the conduct of the study. Dr. Cheung has served as a consultant for Amgen. She chaired and led the 2019 International Society for Clinical Densitometry Position Development Conference on Detection of Atypical Femur Fractures and currently is on the Osteoporosis Canada Guidelines Committee.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

The latest findings regarding the risk for atypical femur fracture (AFF) with use of bisphosphonates for osteoporosis show a significant increase in risk when treatment extends beyond 5 years. The risk is notably higher risk among Asian women, compared with White women. However, the benefits in fracture reduction still appear to far outweigh the risk for AFF.

The research, published in the New England Journal of Medicine, importantly adds to findings from smaller studies by showing effects in a population of nearly 200,000 women in a diverse cohort, said Angela M. Cheung, MD, PhD.

“This study answers some important questions – Kaiser Permanente Southern California is a large health maintenance organization with a diverse racial population,” said Dr. Cheung, director of the Center of Excellence in Skeletal Health Assessment and osteoporosis program at the University of Toronto.

“This is the first study that included a diverse population to definitively show that Asians are at a much higher risk of atypical femur fractures than Caucasians,” she emphasized.

Although AFFs are rare, concerns about them remain pressing in the treatment of osteoporosis, Dr. Cheung noted. “This is a big concern for clinicians – they want to do no harm.”
 

Risk for AFF increases with longer duration of bisphosphonate use

For the study, Dennis M. Black, PhD, of the departments of epidemiology and biostatistics and orthopedic surgery at the University of California, San Francisco, and colleagues identified women aged 50 years or older enrolled in the Kaiser Permanente Southern California system who were treated with bisphosphonates and were followed from January 2007 to November 2017.

Among the 196,129 women identified in the study, 277 AFFs occurred.

After multivariate adjustment, compared with those treated for less than 3 months, for women who were treated for 3-5 years, the hazard ratio for experiencing an AFF was 8.86. For therapy of 5-8 years, the HR increased to 19.88, and for those treated with bisphosphonates for 8 years or longer, the HR was 43.51.

The risk for AFF declined quickly upon bisphosphonate discontinuation; compared with current users, the HR dropped to 0.52 within 3-15 months after the last bisphosphonate use. It declined to 0.26 at more than 4 years after discontinuation.

The risk for AFF with bisphosphonate use was higher for Asian women than for White women (HR, 4.84); this did not apply to any other ethnic groups (HR, 0.99).



Other risk factors for AFF included shorter height (HR, 1.28 per 5-cm decrement), greater weight (HR, 1.15 per 5-kg increment), and glucocorticoid use (HR, 2.28 for glucocorticoid use of 1 or more years).

Among White women, the number of fractures prevented with bisphosphonate use far outweighed the risk for bisphosphonate-associated AFFs.

For example, among White women, during a 3-year treatment period, there were two bisphosphonate-associated AFFs, whereas 149 hip fractures and 541 clinical fractures were prevented, the authors wrote.

After 5 years, there were eight AFFs, but 286 hip fractures and 859 clinical fractures were prevented.

Although the risk-benefit ratio among Asian women still favored prevention of fractures, the difference was less pronounced – eight bisphosphonate-associated AFFs had occurred at 3 years, whereas 91 hip fractures and 330 clinical fractures were prevented.

The authors noted that previous studies have also shown Asian women to be at a disproportionately higher risk for AFF.

An earlier Kaiser Permanente Southern California case series showed that 49% of 142 AFFs occurred in Asian patients, despite the fact that those patients made up only 10% of the study population.

 

 

Various factors could cause higher risk in Asian women

The reasons for the increased risk among Asian women are likely multifactorial and could include greater medication adherence among Asian women, genetic differences in drug metabolism and bone turnover, and, notably, increased lateral stress caused by bowed Asian femora, the authors speculated.

Further questions include whether the risk is limited to Asians living outside of Asia and whether cultural differences in diet or physical activity are risk factors, they added.

“At this early stage, further research into the cause of the increased risk among women of Asian ancestry is warranted,” they wrote.

Although the risk for AFF may be higher among Asian women, the incidence of hip and other osteoporotic fractures is lower among Asians as well as other non-White persons, compared with White persons, they added.

The findings have important implications in how clinicians should discuss treatment options with different patient groups, Dr. Cheung said.

“I think this is one of the key findings of the study,” she added. “In this day and age of personalized medicine, we need to keep the individual patient in mind, and that includes their racial/ethnic background, genetic characteristics, sex, medical conditions and medications, etc. So it is important for physicians to pay attention to this. The risk-benefit ratio of these drugs for Asians will be quite different, compared to Caucasians.”
 

No link between traditional fracture risk factors and AFF, study shows

Interestingly, although older age, previous fractures, and lower bone mineral density are key risk factors for hip and other osteoporotic fractures in the general population, they do not significantly increase the risk for AFF with bisphosphonate use, the study also showed.

“In fact, the oldest women in our cohort, who are at highest risk for hip and other fractures, were at lowest risk for AFF,” the authors wrote.

The collective findings “add to the risk-benefit balance of bisphosphonate treatment in these populations and could directly affect decisions regarding treatment initiation and duration.”

Notable limitations of the study include the fact that most women were treated with one particular bisphosphonate, alendronate, and that other bisphosphonates were underrepresented, Dr. Cheung said.

“This study examined bisphosphonate therapy, but the vast majority of the women were exposed to alendronate, so whether women on risedronate or other bisphosphonates have similar risks is unclear,” she observed.

“In addition, because they can only capture bisphosphonate use using their database, any bisphosphonate exposure prior to joining Kaiser Permanente will not be captured. So the study may underestimate the total cumulative duration of bisphosphonate use,” she added.

The study received support from Kaiser Permanente and discretionary funds from the University of California, San Francisco. The study began with a pilot grant from Merck Sharp & Dohme, which had no role in the conduct of the study. Dr. Cheung has served as a consultant for Amgen. She chaired and led the 2019 International Society for Clinical Densitometry Position Development Conference on Detection of Atypical Femur Fractures and currently is on the Osteoporosis Canada Guidelines Committee.

A version of this article originally appeared on Medscape.com.

The latest findings regarding the risk for atypical femur fracture (AFF) with use of bisphosphonates for osteoporosis show a significant increase in risk when treatment extends beyond 5 years. The risk is notably higher risk among Asian women, compared with White women. However, the benefits in fracture reduction still appear to far outweigh the risk for AFF.

The research, published in the New England Journal of Medicine, importantly adds to findings from smaller studies by showing effects in a population of nearly 200,000 women in a diverse cohort, said Angela M. Cheung, MD, PhD.

“This study answers some important questions – Kaiser Permanente Southern California is a large health maintenance organization with a diverse racial population,” said Dr. Cheung, director of the Center of Excellence in Skeletal Health Assessment and osteoporosis program at the University of Toronto.

“This is the first study that included a diverse population to definitively show that Asians are at a much higher risk of atypical femur fractures than Caucasians,” she emphasized.

Although AFFs are rare, concerns about them remain pressing in the treatment of osteoporosis, Dr. Cheung noted. “This is a big concern for clinicians – they want to do no harm.”
 

Risk for AFF increases with longer duration of bisphosphonate use

For the study, Dennis M. Black, PhD, of the departments of epidemiology and biostatistics and orthopedic surgery at the University of California, San Francisco, and colleagues identified women aged 50 years or older enrolled in the Kaiser Permanente Southern California system who were treated with bisphosphonates and were followed from January 2007 to November 2017.

Among the 196,129 women identified in the study, 277 AFFs occurred.

After multivariate adjustment, and compared with women treated for less than 3 months, the hazard ratio (HR) for experiencing an AFF was 8.86 for women treated for 3-5 years. For therapy of 5-8 years, the HR increased to 19.88, and for those treated with bisphosphonates for 8 years or longer, the HR was 43.51.

The risk for AFF declined quickly upon bisphosphonate discontinuation; compared with current users, the HR dropped to 0.52 within 3-15 months after the last bisphosphonate use. It declined to 0.26 at more than 4 years after discontinuation.

The risk for AFF with bisphosphonate use was higher for Asian women than for White women (HR, 4.84); no such elevation was seen in any other racial/ethnic group (HR, 0.99).

Other risk factors for AFF included shorter height (HR, 1.28 per 5-cm decrement), greater weight (HR, 1.15 per 5-kg increment), and glucocorticoid use (HR, 2.28 for glucocorticoid use of 1 or more years).

Among White women, the number of fractures prevented with bisphosphonate use far outweighed the risk for bisphosphonate-associated AFFs.

For example, among White women, during a 3-year treatment period, there were two bisphosphonate-associated AFFs, whereas 149 hip fractures and 541 clinical fractures were prevented, the authors wrote.

After 5 years, there were eight AFFs, but 286 hip fractures and 859 clinical fractures were prevented.

Although the risk-benefit ratio among Asian women still favored prevention of fractures, the difference was less pronounced – eight bisphosphonate-associated AFFs had occurred at 3 years, whereas 91 hip fractures and 330 clinical fractures were prevented.
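To make the trade-off concrete, the quoted counts can be reduced to fractures prevented per AFF. Below is a minimal sketch of that arithmetic in Python; the per-population denominator behind the published counts is not quoted in the article, but it cancels out of each ratio, and the group labels are ours.

```python
# Fractures prevented per bisphosphonate-associated AFF, using the
# counts quoted above. Whatever denominator the study used (e.g.,
# events per N women treated) cancels out of each ratio.
groups = {
    "White women, 3 years": {"affs": 2, "hip": 149, "clinical": 541},
    "White women, 5 years": {"affs": 8, "hip": 286, "clinical": 859},
    "Asian women, 3 years": {"affs": 8, "hip": 91, "clinical": 330},
}

for label, g in groups.items():
    print(f"{label}: {g['hip'] / g['affs']:.1f} hip and "
          f"{g['clinical'] / g['affs']:.1f} clinical fractures "
          f"prevented per AFF")
```

Run as written, this prints about 74.5 hip fractures prevented per AFF for White women at 3 years versus about 11.4 for Asian women – the same less pronounced, but still favorable, balance the authors describe.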

The authors noted that previous studies have also shown Asian women to be at a disproportionately higher risk for AFF.

An earlier Kaiser Permanente Southern California case series showed that 49% of 142 AFFs occurred in Asian patients, despite the fact that those patients made up only 10% of the study population.

Various factors could cause higher risk in Asian women

The reasons for the increased risk among Asian women are likely multifactorial and could include greater medication adherence, genetic differences in drug metabolism and bone turnover, and, notably, increased lateral stress caused by bowed femora, the authors speculated.

Further questions include whether the risk is limited to Asians living outside of Asia and whether cultural differences in diet or physical activity are risk factors, they added.

“At this early stage, further research into the cause of the increased risk among women of Asian ancestry is warranted,” they wrote.

Although the risk for AFF may be higher among Asian women, the incidence of hip and other osteoporotic fractures is lower among Asians as well as other non-White persons, compared with White persons, they added.

The findings have important implications for how clinicians discuss treatment options with different patient groups, Dr. Cheung said.

“I think this is one of the key findings of the study,” she added. “In this day and age of personalized medicine, we need to keep the individual patient in mind, and that includes their racial/ethnic background, genetic characteristics, sex, medical conditions and medications, etc. So it is important for physicians to pay attention to this. The risk-benefit ratio of these drugs for Asians will be quite different, compared to Caucasians.”
 

No link between traditional fracture risk factors and AFF, study shows

Interestingly, although older age, previous fractures, and lower bone mineral density are key risk factors for hip and other osteoporotic fractures in the general population, they do not significantly increase the risk for AFF with bisphosphonate use, the study also showed.

“In fact, the oldest women in our cohort, who are at highest risk for hip and other fractures, were at lowest risk for AFF,” the authors wrote.

The collective findings “add to the risk-benefit balance of bisphosphonate treatment in these populations and could directly affect decisions regarding treatment initiation and duration.”

Notable limitations of the study include the fact that most women were treated with one particular bisphosphonate, alendronate, and that other bisphosphonates were underrepresented, Dr. Cheung said.

“This study examined bisphosphonate therapy, but the vast majority of the women were exposed to alendronate, so whether women on risedronate or other bisphosphonates have similar risks is unclear,” she observed.

“In addition, because they can only capture bisphosphonate use using their database, any bisphosphonate exposure prior to joining Kaiser Permanente will not be captured. So the study may underestimate the total cumulative duration of bisphosphonate use,” she added.

The study received support from Kaiser Permanente and discretionary funds from the University of California, San Francisco. The study began with a pilot grant from Merck Sharp & Dohme, which had no role in the conduct of the study. Dr. Cheung has served as a consultant for Amgen. She chaired and led the 2019 International Society for Clinical Densitometry Position Development Conference on Detection of Atypical Femur Fractures and currently is on the Osteoporosis Canada Guidelines Committee.

A version of this article originally appeared on Medscape.com.


Evidence mounts for COVID-19 effects on thyroid gland

Article Type
Changed
Thu, 08/26/2021 - 16:01

Rates of thyrotoxicosis are significantly higher among patients who are critically ill with COVID-19 than among patients who are critically ill but do not have COVID-19, suggesting an atypical form of thyroiditis related to the novel coronavirus infection, according to new research.

“We suggest routine assessment of thyroid function in patients with COVID-19 requiring high-intensity care because they frequently present with thyrotoxicosis due to a form of subacute thyroiditis related to SARS-CoV-2,” the authors wrote in correspondence published online in The Lancet Diabetes & Endocrinology.

However, notably, the study – which compared critically ill ICU patients who had COVID-19 with those who did not have COVID-19 or who had milder cases of COVID-19 – indicates that thyroid disorders do not appear to increase the risk of developing COVID-19, first author Ilaria Muller, MD, PhD, of the department of endocrinology, IRCCS Fondazione Ca’ Granda Ospedale Maggiore Policlinico, Milan, said in an interview.

“It is important to highlight that we did not find an increased prevalence of preexisting thyroid disorders in COVID-19 patients (contrary to early media reports),” she said. “So far, clinical observations do not support this fear, and we need to reassure people with thyroid disorders, since such disorders are very common among the general population.”

Yet the findings add to emerging evidence of a COVID-19/thyroid relationship, Angela M. Leung, MD, said in an interview.

“Given the health care impacts of the current COVID-19 pandemic worldwide, this study provides some insight on the potential systemic inflammation, as well as thyroid-specific inflammation, of the SARS-CoV-2 virus that is described in some emerging reports,” she said.

“This study joins at least six others that have reported a clinical presentation resembling subacute thyroiditis in critically ill patients with COVID-19,” noted Dr. Leung, of the division of endocrinology, diabetes, and metabolism in the department of medicine at the University of California, Los Angeles.
 

Thyroid function analysis in those with severe COVID-19

Dr. Muller explained that preliminary data from her institution showed thyroid abnormalities in patients who were severely ill with COVID-19. She and her team extended the evaluation to include thyroid data and other data on 93 patients with COVID-19 who were admitted to high-intensity care units (HICUs) in Italy during the 2020 pandemic.

Those data were compared with data on 101 critically ill patients admitted to the same HICUs in 2019 who did not have COVID-19. A third group of 52 patients with COVID-19 who were admitted to low-intensity care units (LICUs) in Italy in 2020 were also included in the analysis.

The mean age of the patients in the HICU 2020 group was 65.3 years; in the HICU 2019 group, it was 73 years; and in the LICU group, it was 70 years (P = .001). In addition, the HICU 2020 group included more men than the other two groups (69% vs. 56% and 48%; P = .03).

Of note, only 9% of patients in the HICU 2020 group had preexisting thyroid disorders, compared with 21% in the LICU group and 23% in the HICU 2019 group (P = .017).

These findings suggest that “such conditions are not a risk factor for SARS-CoV-2 infection or severity of COVID-19,” the authors wrote.

The patients with the preexisting thyroid conditions were excluded from the thyroid function analysis.

A significantly higher proportion of patients in the HICU 2020 group (13; 15%) were thyrotoxic upon admission, compared with just 1 (1%) of 78 patients in the HICU 2019 group (P = .002) and 1 (2%) of 41 patients in the LICU group (P = .025).

Among the 14 patients in the two COVID-19 groups who had thyrotoxicosis, the majority were male (9; 64%).

Among those in the HICU 2020 group, serum thyroid-stimulating hormone concentrations were lower than in either of the other two groups (P = .018), and serum free thyroxine (free T4) concentrations were higher than in the LICU group (P = .016) but not the HICU 2019 group.
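The correspondence does not state which statistical test produced the quoted P values, but a Fisher’s exact test on the reported counts gives a value of the same order and is a quick way to retrace the comparison. A minimal sketch, assuming a denominator of 85 for the HICU 2020 group (the 93 patients minus those excluded for preexisting thyroid disorders; the article quotes only “13; 15%”):

```python
from scipy.stats import fisher_exact

# Thyrotoxicosis on admission, HICU 2020 vs. HICU 2019.
# The HICU 2020 denominator of 85 is an assumption (93 patients minus
# those excluded for preexisting thyroid disorders); 1 of 78 is quoted.
table = [
    [13, 85 - 13],  # HICU 2020: thyrotoxic, not thyrotoxic
    [1, 78 - 1],    # HICU 2019: thyrotoxic, not thyrotoxic
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.1f}, two-sided P = {p_value:.4f}")
```

On these assumed counts the test returns a P value on the order of .001-.002, consistent with the significant difference reported above.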
 

Differences compared with other infection-related thyroiditis

Although thyrotoxicosis relating to subacute viral thyroiditis can result from a wide variety of viral infections, there are some key differences with COVID-19, Dr. Muller said.

“Thyroid dysfunction related to SARS-CoV-2 seems to be milder than that of classic subacute thyroiditis due to other viruses,” she explained. Furthermore, thyroid dysfunction associated with other viral infections is more common in women, whereas there were more male patients with the COVID-19–related atypical thyroiditis.

In addition, the thyroid effects developed early in the course of COVID-19, whereas with other viruses they usually emerge after the infection.

Patients also did not have the neck pain that is common with classic viral thyroiditis. Moreover, the thyroid abnormalities appear to correlate with the severity of COVID-19, whereas with other viral causes they are seen even in patients with mild symptoms.

In addition to the risk for subacute viral thyroiditis, critically ill patients in general are at risk of developing nonthyroidal illness syndrome, with alterations in thyroid function. However, thyroid hormone measures in the patients severely ill with COVID-19 were not consistent with that syndrome.

A subanalysis of eight HICU 2020 patients with thyroid dysfunction who were followed for 55 days after discharge showed that two experienced hyperthyroidism, though likely not related to COVID-19; in the remaining six, thyroid function normalized.

Dr. Muller speculated that, when ill with COVID-19, the patients likely had a combination of SARS-CoV-2–related atypical thyroiditis and nonthyroidal illness syndrome, known as T4 toxicosis.
 

Will there be any long-term effects?

Importantly, it remains unknown whether the novel coronavirus has longer-term effects on the thyroid, Dr. Muller said.

“We cannot predict what will be the long-lasting thyroid effects after COVID-19,” she said.

With classic subacute viral thyroiditis, “After a few years ... 5%-20% of patients develop permanent hypothyroidism, [and] the same might happen in COVID-19 patients,” she hypothesized. “We will follow our patients long term to answer this question – this study is already ongoing.”

In the meantime, diagnosis of thyroid dysfunction in patients with COVID-19 is important, inasmuch as it could worsen these patients’ already critical condition, Dr. Muller stressed.

“The gold-standard treatment for thyroiditis is steroids, so the presence of thyroid dysfunction might represent an additional indication to such treatment in COVID-19 patients, to be verified in properly designed clinical trials,” she advised.
 

ACE2 cell receptors highly expressed in thyroid

Dr. Muller and colleagues also noted recent research showing that ACE2 – demonstrated to be a key host-cell entry receptor for both SARS-CoV and SARS-CoV-2 – is expressed at even higher levels in the thyroid than in the lungs, where the virus causes COVID-19’s notorious pulmonary effects.

Dr. Muller said the implications of ACE2 expression in the thyroid remain to be elucidated.

“If ACE2 is confirmed to be expressed at higher levels, compared with the lungs in the thyroid gland and other tissues, i.e., small intestine, testis, kidney, heart, etc, dedicated studies will be needed to correlate ACE2 expression with the organs’ susceptibility to SARS-CoV-2 reflected by clinical presentation,” she said.

Dr. Leung added that, as a take-home message from these and the other thyroid/COVID-19 studies, “data are starting to show us that COVID-19 infection may cause thyrotoxicosis that is possibly related to thyroid and systemic inflammation. However, the serum thyroid function test abnormalities seen in COVID-19 patients with subacute thyroiditis are also likely exacerbated to a substantial extent by nonthyroidal illness physiology.”

The authors have disclosed no relevant financial relationships. Dr. Leung is on the advisory board of Medscape Diabetes and Endocrinology.

A version of this article originally appeared on Medscape.com.


Urine screen as part of triple test improves ID of adrenal cancer

Article Type
Changed
Wed, 08/05/2020 - 08:29

A strategy that includes a urine steroid test along with imaging characteristics and tumor size criteria can significantly improve the challenging diagnosis of adrenocortical cancer, helping to avoid unnecessary, and often unsuccessful, further imaging and even surgery, new research shows.

“A triple-test strategy of tumor diameter, imaging characteristics, and urine steroid metabolomics improves detection of adrenocortical carcinoma, which could shorten time to surgery for patients with ... carcinoma and help to avoid unnecessary surgery in patients with benign tumors,” the authors say in research published online July 23 in The Lancet Diabetes & Endocrinology.

The triple-test strategy can be expected to make its way into international guidelines, noted joint lead author Irina Bancos, MD, an associate professor of endocrinology at the Mayo Clinic, Rochester, Minn., in a press statement issued by the University of Birmingham (England), several of whose researchers were involved in the study.

“The findings of this study will feed into the next international guidelines on the management of adrenal tumors and the implementation of the new test will hopefully improve the overall outlook for patients diagnosed with adrenal tumors,” Dr. Bancos emphasized.

More imaging has led to detection of more adrenal tumors

Advances in CT and MRI have increased the ability to detect adrenal incidentalomas, which are now picked up on about 5% of scans, and the widespread use of imaging has made such findings more common, particularly in older people.

Adrenocortical carcinomas represent only about 2%-12% of adrenal incidentalomas, but the prognosis is very poor, and early detection and surgery can improve outcomes, so findings of any adrenal tumor typically trigger additional multimodal imaging to rule out malignancy.

Evidence is lacking on the accuracy of imaging in determining whether such masses are truly cancerous or benign, and the additional procedures add costs as well as expose patients to radiation that may ultimately have no benefit. However, a previous proof-of-concept study from the same authors did show that the presence of excess adrenal steroid hormones in the urine is a key indicator of adrenal tumors, and other research has supported the findings.

All three tests together give best predictive value: EURINE-ACT

To further validate this work, the authors conducted the EURINE-ACT trial, a prospective 14-center study that is the first of its kind to evaluate the efficacy of a screening strategy for adrenocortical carcinoma that combines urine steroid profiling with tumor size and imaging characteristics.

The study of 2,017 participants with newly diagnosed adrenal masses, recruited from January 2011 to July 2016 from specialist centers in 11 different countries, assessed the diagnostic accuracy of three components: maximum tumor diameter (≥4 cm vs. <4 cm), imaging characteristics (positive vs. negative), and urine steroid metabolomics (low, medium, or high risk of adrenocortical carcinoma), separately and in combination.

Of the patients, 98 (4.9%) had adrenocortical carcinoma confirmed clinically, histopathologically, or biochemically.

Tumors with diameters of 4 cm or larger were identified in 488 patients (24.2%) and were observed in the vast majority of patients with adrenocortical carcinoma (96 of 98), for a positive predictive value (PPV) of 19.7%.

Likewise, the PPV for imaging characteristics was 19.7%. However, increasing the unenhanced CT tumor attenuation threshold to 20 Hounsfield units (HU) from the recommended 10 HU increased specificity for adrenocortical carcinoma (80.0% vs. 64.0%) while maintaining sensitivity (99.0% vs. 100.0%).

Comparatively, a urine steroid metabolomics result suggesting a high risk of adrenocortical carcinoma had a PPV of 34.6%.

A total of 106 patients (5.3%) met the criteria for all three measures, and the PPV for all three was 76.4%.

Using the criteria, 70 patients (3.5%) were classified as being at moderate risk of adrenocortical carcinoma and 1,841 (91.3%) at low risk, for a negative predictive value (NPV) of 99.7%.
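For readers who want to retrace the arithmetic, the predictive values follow directly from the counts above. A minimal sketch; note that the roughly 81 carcinomas among triple-positive patients is implied by the 76.4% PPV rather than quoted directly:

```python
# Positive predictive value = true positives / all test positives.
def ppv(true_positives: int, test_positives: int) -> float:
    return true_positives / test_positives

# Size criterion: 96 of the 98 carcinomas were among the 488 patients
# with tumors of 4 cm or larger.
print(f"PPV, tumor diameter: {ppv(96, 488):.1%}")  # ~19.7%

# Triple test: 106 patients met all three criteria, and a 76.4% PPV
# implies roughly 0.764 * 106 ≈ 81 confirmed carcinomas among them.
print(f"Implied carcinomas among triple-positives: {0.764 * 106:.0f}")
```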

“Use of radiation-free, noninvasive urine steroid metabolomics has a higher PPV than two standard imaging tests, and best performance was seen with the combination of all three tests,” the authors state.

Limit urine test to patients with larger tumors

They note that the use of the combined diagnostic strategy would have led to additional imaging in only 488 (24.2%) of the study’s 2,017 patients, compared with the 2,737 scans that were actually conducted before reaching a diagnostic decision.

“Implementation of urine steroid metabolomics in the routine diagnostic assessment of newly discovered adrenal masses could reduce the number of imaging procedures required to diagnose adrenocortical carcinoma and avoid unnecessary surgery of benign adrenal tumors, potentially yielding beneficial effects with respect to patient burden and health care costs,” they stress.

And regarding imaging parameters, “we also showed that using a cutoff of 20 HU for unenhanced CT tumor attenuation increases the accuracy of imaging characteristic assessment for exclusion of adrenocortical carcinoma, compared with the currently recommended cutoff of 10 HU, which has immediate implications for clinical practice,” they emphasize.
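A back-of-the-envelope calculation shows what that change in cutoff means at the scale of this cohort. The sketch below assumes the quoted sensitivity and specificity apply across all 2,017 masses; the benign count of 1,919 is inferred (2,017 masses minus 98 carcinomas), not quoted directly.

```python
# Approximate effect of raising the unenhanced-CT attenuation cutoff
# from 10 HU to 20 HU, applying the quoted sensitivity/specificity
# to the whole cohort (1,919 benign masses is an inferred figure).
carcinomas, benign = 98, 2017 - 98

for cutoff_hu, sensitivity, specificity in [(10, 1.00, 0.64), (20, 0.99, 0.80)]:
    ruled_out = specificity * benign          # benign masses correctly cleared
    missed = (1 - sensitivity) * carcinomas   # carcinomas read as negative
    print(f"{cutoff_hu} HU cutoff: ~{ruled_out:.0f} benign masses cleared, "
          f"~{missed:.1f} carcinomas missed")
```

On those assumptions, the 20-HU cutoff clears roughly 300 more benign masses from further workup at the cost of about one missed carcinoma, which is the trade-off behind the recommendation.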

In an accompanying editorial, Adina F. Turcu, MD, of the division of metabolism, endocrinology, and diabetes, University of Michigan, Ann Arbor, and Axel K. Walch, MD, of the Helmholtz Zentrum München–German Research Centre for Environmental Health, agree. “The introduction of urine steroid metabolomics into routine clinical practice would provide major advantages,” they state.

However, they point out that, although the overall negative predictive value of the test was excellent, the specificity was weak.

“Thus, urine steroid metabolomics should be limited to patients who have adrenal nodules larger than 4 cm and have qualitative imaging characteristics suggestive of malignancy,” say Dr. Turcu and Dr. Walch.

The EURINE-ACT results suggest this subgroup would represent only about 12% of all patients with adrenal incidentalomas, they add.

Issues that remain to be addressed with regard to the implementation of the screening strategy include how to best respond to patients who are classified as having intermediate or moderate risk of malignancy, and whether the diagnostic value of steroid metabolomics could be refined by adding analytes or parameters, the editorialists conclude.

The study was funded by the European Commission, U.K. Medical Research Council, Wellcome Trust, U.K. National Institute for Health Research, U.S. National Institutes of Health, the Claire Khan Trust Fund at University Hospitals Birmingham Charities, and the Mayo Clinic Foundation for Medical Education and Research.
 

A version of this article originally appeared on Medscape.com.


New osteoporosis recommendations from AACE help therapy selection

Article Type
Changed
Fri, 07/24/2020 - 15:23

Recommendations on use of the new dual-action anabolic agent romosozumab (Evenity, Amgen) and how to safely transition between osteoporosis agents are two of the issues addressed in the latest clinical practice guidelines for the diagnosis and treatment of postmenopausal osteoporosis from the American Association of Clinical Endocrinologists and American College of Endocrinology.

“This guideline is a practical tool for endocrinologists, physicians in general, regulatory bodies, health-related organizations, and interested laypersons regarding the diagnosis, evaluation, and treatment of postmenopausal osteoporosis,” the authors wrote.

The guidelines focus on 12 key clinical questions related to postmenopausal osteoporosis, with 52 specific recommendations, each graded according to the level of evidence.

They also include a treatment algorithm to help guide choice of therapy.
 

Reiterating role of FRAX in the diagnosis of patients with osteopenia

Among key updates is an emphasis on the role of the Fracture Risk Assessment Tool (FRAX) in the diagnosis of osteoporosis in patients with osteopenia.

While patients have traditionally been diagnosed with osteoporosis based on the presence of low bone mineral density (BMD) in the absence of fracture, the updated guidelines indicate that osteoporosis may be diagnosed in patients with osteopenia and an increased fracture risk using FRAX.

“The use of FRAX and osteopenia to diagnose osteoporosis was first proposed by the National Bone Health Alliance years ago, and in the 2016 guideline, we agreed with it,” Pauline M. Camacho, MD, cochair of the guidelines task force, said in an interview.

“We reiterate in the 2020 guideline that we feel this is a valid diagnostic criteria,” said Dr. Camacho, professor of medicine and director of the Osteoporosis and Metabolic Bone Disease Center at Loyola University Chicago, Maywood, Ill. “It makes sense because when the thresholds are met by FRAX in patients with osteopenia, treatment is recommended. Therefore, why would they not fulfill treatment criteria for diagnosing osteoporosis?”

An increased risk of fracture based on a FRAX score may also be used to determine pharmacologic therapy, as can other traditional factors such as a low T score or a fragility fracture, the guidelines stated.
 

High risk vs. very high risk guides choice of first therapy

Another key update is the clarification of the risk stratification of patients who are high risk versus very high risk, which is key in determining the initial choice of agents and duration of therapy.

Specifically, patients should be considered at very high fracture risk if they meet any of the following criteria: a recent fracture (e.g., within the past 12 months), fracture while on approved osteoporosis therapy, multiple fractures, fracture while on drugs causing skeletal harm (e.g., long-term glucocorticoids), very low T score (e.g., less than −3.0), high risk for falls or history of injurious falls, or very high fracture probability by FRAX (e.g., major osteoporotic fracture >30%, hip fracture >4.5%) or another validated fracture risk algorithm.

Meanwhile, patients should be considered at high risk if they have been diagnosed with osteoporosis but do not meet the criteria for very high fracture risk.
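Rendered as simple pseudologic, the stratification amounts to an any-criterion check. The sketch below is our illustrative reading of the quoted criteria; the field names and threshold checks mirror the text but are not language from the guideline itself.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    recent_fracture: bool            # fracture within the past 12 months
    fracture_on_therapy: bool        # fracture while on osteoporosis drugs
    multiple_fractures: bool
    fracture_on_harmful_drugs: bool  # e.g., long-term glucocorticoids
    t_score: float                   # lowest measured T score
    high_fall_risk: bool             # high risk for falls/injurious falls
    frax_major_pct: float            # FRAX major osteoporotic fracture, %
    frax_hip_pct: float              # FRAX hip fracture, %

def fracture_risk_category(p: Patient) -> str:
    """'Very high' if any quoted criterion is met; otherwise 'high'
    (assumes an osteoporosis diagnosis has already been made)."""
    very_high = (
        p.recent_fracture
        or p.fracture_on_therapy
        or p.multiple_fractures
        or p.fracture_on_harmful_drugs
        or p.t_score < -3.0
        or p.high_fall_risk
        or p.frax_major_pct > 30
        or p.frax_hip_pct > 4.5
    )
    return "very high" if very_high else "high"
```

Under this reading, for example, a woman with a T score of −3.2 and no other risk factors would fall into the very-high-risk group.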
 

Romosozumab brought into the mix

Another important update provides information on the role of one of the newest osteoporosis agents on the market, the anabolic drug romosozumab, a monoclonal antibody directed against sclerostin.

The drug’s approval by the Food and Drug Administration in 2019 for postmenopausal women at high risk of fracture was based on two large trials that showed dramatic increases in bone density through bone modeling as well as remodeling.

Those studies specifically showed significant reductions in radiographic vertebral fractures with romosozumab, compared with placebo and alendronate.

Dr. Camacho noted that romosozumab “will likely be for the very high risk group and those who have maxed out on teriparatide or abaloparatide.”

Romosozumab can safely be used in patients with prior radiation exposure, the guidelines noted.



Importantly, because of reports of a higher risk of serious cardiovascular events with romosozumab, compared with alendronate, romosozumab comes with a black-box warning that it should not be used in patients at high risk for cardiovascular events or who have had a recent myocardial infarction or stroke.

“Unfortunately, the very high risk group is often the older patients,” Dr. Camacho noted.

“The drug should not be given if there is a history of myocardial infarction or stroke in the past year,” she emphasized. “Clinical judgment is needed to decide who is at risk for cardiovascular complications.”

Notably, teriparatide and abaloparatide have black-box warnings of their own regarding the risk for osteosarcoma.

Switching therapies

Reflecting the evolving data on osteoporosis drug holidays, the guidelines also addressed the issue and the clinical challenges of switching therapies.

“In 2016, we said drug holidays are not recommended, and the treatment can be continued indefinitely, [however] in 2020, we felt that if some patients are no longer high risk, they can be transitioned off the drug,” Dr. Camacho said.

For teriparatide and abaloparatide, the FDA recommends treatment be limited to no more than 2 years, and for romosozumab, 1 year.

The updated guidelines recommend that, upon discontinuation of an anabolic agent (e.g., abaloparatide, romosozumab, or teriparatide), patients be switched to an antiresorptive agent, such as denosumab or a bisphosphonate, to prevent loss of BMD and of antifracture efficacy.

Discontinuation of denosumab, however, can have notably negative effects. Clinical trials show rapid decreases in BMD when denosumab treatment is stopped after 2 or 8 years, as well as rapid loss of protection from vertebral fractures.

Therefore, if denosumab is going to be discontinued, there should be a proper transition to an antiresorptive agent for a limited time, such as one infusion of the bisphosphonate zoledronate.
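
The sequencing rules above can be condensed into a short lookup. The sketch below is an illustrative simplification under the assumptions stated in the comments, not prescribing guidance.

```python
# Rough encoding of the sequencing summarized above: FDA course limits for
# the anabolic agents and the recommended antiresorptive follow-on.
# Illustrative only.

MAX_COURSE_YEARS = {"teriparatide": 2, "abaloparatide": 2, "romosozumab": 1}

def follow_on_therapy(current_agent: str) -> str:
    if current_agent in MAX_COURSE_YEARS:        # anabolic agents
        return "antiresorptive (e.g., denosumab or a bisphosphonate)"
    if current_agent == "denosumab":             # do not stop abruptly
        return "bisphosphonate bridge (e.g., one zoledronate infusion)"
    return "reassess fracture risk before considering a drug holiday"

print(follow_on_therapy("romosozumab"))
# antiresorptive (e.g., denosumab or a bisphosphonate)
```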
 

Communicate the risks with and without treatment to patients

The authors underscored that, in addition to communicating the potential risks and expected benefits of osteoporosis treatments, clinicians should make sure patients fully appreciate the risk of fractures – and their consequences, such as pain, disability, loss of independence, and death – when no treatment is given.

“It is incumbent on the clinician to provide this information to each patient in a manner that is fully understood, and it is equally important to learn from the patient about cultural beliefs, previous treatment experiences, fears, and concerns,” they wrote.

In estimating patients’ fracture risk, the T score must be combined with clinical risk factors, particularly advanced age and previous fracture, and clinicians should recognize that absolute fracture risk is more useful than a risk ratio in developing treatment plans.

“Treatment recommendations may be quite different; an early postmenopausal woman with a T score of −2.5 has osteoporosis, although fracture risk is much lower than [that of] an 80-year-old woman with the same T score,” the authors explained.

Dr. Camacho reported financial relationships with Amgen and Shire. Disclosures for other task force members are detailed in the guidelines.

A version of this article originally appeared on Medscape.com.


Epilepsy after TBI linked to worse 12-month outcomes

Article Type
Changed
Thu, 07/30/2020 - 12:02

The severity of head injury in traumatic brain injury (TBI) is significantly linked with the risk of developing posttraumatic epilepsy (PTE) and seizures, and posttraumatic epilepsy itself further worsens outcomes at 12 months, findings from an analysis of a large, prospective database suggest. “We found that patients essentially have a 10-times greater risk of developing posttraumatic epilepsy and seizures at 12 months [post injury] if the presenting Glasgow Coma Scale (GCS) is less than 8,” said lead author John F. Burke, MD, PhD, University of California, San Francisco, in presenting the findings as part of the virtual annual meeting of the American Association of Neurological Surgeons.

Assessing risk factors

While posttraumatic epilepsy represents an estimated 20% of all cases of symptomatic epilepsy, many questions remain on those most at risk and on the long-term effects of posttraumatic epilepsy on TBI outcomes. To probe those issues, Dr. Burke and colleagues turned to the multicenter TRACK-TBI database, which has prospective, longitudinal data on more than 2,700 patients with traumatic brain injuries and is considered the largest source of prospective data on posttraumatic epilepsy.

Using the criteria of no previous epilepsy and having 12 months of follow-up, the team identified 1,493 patients with TBI. In addition, investigators identified 182 orthopedic controls (included and prospectively followed because they have injuries but not specifically head trauma) and 210 controls who are friends of the patients and who do not have injuries but allow researchers to control for socioeconomic and environmental factors.

Of the 1,493 patients with TBI, 41 (2.7%) were determined to have posttraumatic epilepsy, assessed according to a National Institute of Neurological Disorders and Stroke epilepsy screening questionnaire, which is designed to identify patients with posttraumatic epilepsy symptoms. There were no reports of epilepsy symptoms using the screening tool among the controls. Dr. Burke noted that the 2.7% was in agreement with historical reports.

In comparing patients with TBI who did and did not have posttraumatic epilepsy, no differences were observed between the groups in terms of gender, although there was a trend toward younger age among those with PTE (mean age, 35.4 years with posttraumatic epilepsy vs. 41.5 years without; P = .05).

A major risk factor for the development of posttraumatic epilepsy was presenting GCS scores. Among those with scores of less than 8, indicative of severe injury, the rate of posttraumatic epilepsy was 6% at 6 months and 12.5% at 12 months. In contrast, those with TBI presenting with GCS scores between 13 and 15, indicative of minor injury, had an incidence of posttraumatic epilepsy of 0.9% at 6 months and 1.4% at 12 months.
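
A quick back-of-the-envelope calculation connects these incidences to the roughly 10-fold risk quoted at the outset; the two-line sketch below simply restates the reported 12-month figures.

```python
# 12-month PTE incidence by presenting GCS band, as reported above.
pte_12mo = {"gcs_lt_8": 0.125, "gcs_13_to_15": 0.014}
print(round(pte_12mo["gcs_lt_8"] / pte_12mo["gcs_13_to_15"], 1))  # ~8.9-fold, i.e., roughly 10x
```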

Imaging findings in the two groups showed that hemorrhage detected on CT imaging was associated with a significantly higher risk for posttraumatic epilepsy (P < .001).

“The main takeaway is that any hemorrhage in the brain is a major risk factor for developing seizures,” Dr. Burke said. “Whether it is subdural, epidural blood, subarachnoid or contusion, any blood confers a very [high] risk for developing seizures.”

Posttraumatic epilepsy was linked to poorer longer-term outcomes even for patients with lesser injury: Among those with TBI and GCS scores of 13-15, the mean Glasgow Outcome Scale Extended (GOSE) score at 12 months among those without posttraumatic epilepsy was 7, indicative of a good recovery with minor deficits, whereas the mean GOSE score for those with PTE was 4.6, indicative of moderate to severe disability (P < .001).

“It was surprising to us that PTE-positive patients had a very significant decrease in GOSE, compared to PTE-negative patients,” Dr. Burke said. “There was a nearly 2-point drop in the GOSE and that was extremely significant.”

A multivariate analysis showed there was still a significant independent risk for a poor GOSE score with posttraumatic epilepsy after controlling for GCS score, head CT findings, and age (P < .001).
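
To make the adjusted analysis concrete, here is a minimal sketch of the kind of multivariable logistic model described, fitted to simulated data; it is not the study’s code, and every variable name, coefficient, and observation below is invented.

```python
# Illustrative only: a logistic model of poor 12-month GOSE on PTE,
# adjusted for GCS, CT hemorrhage, and age, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 400
df = pd.DataFrame({
    "pte":      rng.integers(0, 2, n),     # posttraumatic epilepsy (0/1)
    "gcs":      rng.integers(3, 16, n),    # presenting GCS
    "ct_bleed": rng.integers(0, 2, n),     # hemorrhage on CT (0/1)
    "age":      rng.integers(18, 85, n),
})

# Simulated outcome: poor GOSE made more likely by PTE, lower GCS,
# hemorrhage, and older age (effect sizes are arbitrary).
lin = (-1.0 + 1.2 * df["pte"] - 0.15 * (df["gcs"] - 9)
       + 0.8 * df["ct_bleed"] + 0.02 * (df["age"] - 45))
df["poor_gose"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

model = smf.logit("poor_gose ~ pte + gcs + ct_bleed + age", data=df).fit(disp=False)
print(model.params["pte"])  # adjusted log-odds of a poor outcome with PTE
```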

The authors also looked at mood outcomes using the Brief Symptom Inventory–18, which showed significantly worse scores in those with posttraumatic epilepsy after multivariate adjustment (P = .01). Cognitive outcomes on the Rivermead cognitive metric were also significantly worse with posttraumatic epilepsy (P = .001).

“On all metrics tested, posttraumatic epilepsy worsened outcomes,” Dr. Burke said.

He noted that the study has some key limitations, including the relatively short 12-month follow-up; a previous study showed a linear increase in posttraumatic epilepsy incidence with follow-up out to 30 years. “The fact that we found 41 patients at 12 months indicates there are probably more that are out there who are going to develop seizures, but because we don’t have the follow-up we can’t look at that.”

Although the screening questionnaires are effective, “the issue is these people are not being seen by an epileptologist or having scalp EEG done, and we need a more accurate way to do this,” he said. A new study, TRACK-TBI EPI, will address those limitations and a host of other issues with a 5-year follow-up.
 

 

 

Capturing the nuances of brain injury

Commenting on the study as a discussant, neurosurgeon Uzma Samadani, MD, PhD, of the Minneapolis Veterans Affairs Medical Center and CentraCare in Minneapolis, suggested that future work should focus on issues including the wide-ranging mechanisms that could explain the seizure activity.

“For example, it’s known that posttraumatic epilepsy or seizures can be triggered by abnormal conductivity due to multiple different mechanisms associated with brain injury, such as endocrine dysfunction, cortical-spreading depression, and many others,” said Dr. Samadani, who has been a researcher on the TRACK-TBI study.

Factors ranging from genetic differences to comorbid conditions such as alcoholism can play a role in brain injury susceptibility, Dr. Samadani added. Furthermore, outcome measures currently available simply may not capture the unknown nuances of brain injury.

“We have to ask, are these all-or-none phenomena, or is aberrant electrical activity after brain injury a continuum of dysfunction?” Dr. Samadani speculated.

“I would caution that we are likely underestimating the non–easily measurable consequences of brain injury,” she said. “And the better we can quantitate susceptibility, classify the nature of injury and target acute management, the less posttraumatic epilepsy/aberrant electrical activity our patients will have.”

Dr. Burke and Dr. Samadani disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


FDA approves first oral somatostatin analog for acromegaly

Article Type
Changed
Mon, 06/29/2020 - 15:04

The Food and Drug Administration has approved oral octreotide (Mycapssa, Chiasma) delayed-release capsules for the long-term maintenance treatment of patients with acromegaly who previously responded to and tolerated octreotide or lanreotide injections.

“People living with acromegaly experience many challenges associated with injectable therapies and are in need of new treatment options,” Jill Sisco, president of Acromegaly Community, a patient support group, said in a Chiasma press release.

“The entire acromegaly community has long awaited oral therapeutic options and it is gratifying to see that the FDA has now approved the first oral somatostatin analog (SSA) therapy with the potential to make a significant impact in the lives of people with acromegaly and their caregivers,” she added.

Acromegaly, a rare, chronic disease usually caused by a benign pituitary tumor that leads to excess production of growth hormone and insulin-like growth factor-1 (IGF-1) hormone, can be cured through the successful surgical removal of the pituitary tumor. However, management of the disease remains a lifelong challenge for many who must rely on chronic injections.

The new oral formulation of octreotide is the first and only oral somatostatin analog approved by the FDA.

The approval was based on the results of the 9-month, phase 3 pivotal CHIASMA OPTIMAL clinical trial, involving 56 adults with acromegaly controlled by injectable SSAs.

The patients, who were randomized 1:1 to octreotide capsules or placebo, were dose-titrated from 40 mg/day up to a maximum of 80 mg/day, taken as two capsules in the morning and two in the evening at the maximum dose.

The study met its primary endpoint: overall, 58% of patients taking octreotide maintained their IGF-1 response at the end of 9 months, compared with 19% of those on placebo (P = .008), with response defined as the average of the last two IGF-1 levels – assessed at weeks 34 and 36 – being at or below the upper limit of normal.
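
As a worked illustration, the responder rule reduces to a two-line check; the IGF-1 values and upper limit of normal in the example below are invented.

```python
# Hedged illustration of the primary-endpoint rule as described above.

def maintained_igf1_response(igf1_week34: float,
                             igf1_week36: float,
                             upper_limit_normal: float) -> bool:
    """Responder if the mean of the last two IGF-1 levels is <= ULN."""
    return (igf1_week34 + igf1_week36) / 2 <= upper_limit_normal

print(maintained_igf1_response(210.0, 230.0, upper_limit_normal=240.0))  # True
```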

The trial also met its secondary endpoints, which included the proportion of patients who maintained a growth hormone response at week 36 compared with screening, the time to loss of response, and the proportion of patients requiring reversion to prior treatment.

Safety data were favorable. Adverse reactions to the drug, detailed in the prescribing information, include cholelithiasis and associated complications; hyperglycemia and hypoglycemia; thyroid function abnormalities; cardiac function abnormalities; decreased vitamin B12 levels; and abnormal Schilling’s test results.

Results from the clinical trial “are encouraging for patients with acromegaly,” the study’s principal investigator, Susan Samson, MD, PhD, of Baylor College of Medicine, Houston, said in the Chiasma statement.

“Based on data from the CHIASMA OPTIMAL trial showing patients on therapy being able to maintain mean IGF-1 levels within the normal range at the end of treatment, I believe oral octreotide capsules hold meaningful promise for patients with this disease and will address a long-standing unmet treatment need,” she added.

Chiasma reports that it expects Mycapssa to be available in the fourth quarter of 2020, pending FDA approval of a planned manufacturing supplement to the approved new drug application.

The company further plans to provide patient support services including assistance with insurance providers and specialty pharmacies and support in incorporating treatment into patients’ daily routines.

Despite effective biochemical control of growth hormone, many patients with acromegaly continue to suffer symptoms, mainly because of comorbidities, so it is important that these are also adequately treated, a consensus group concluded earlier this year.

The CHIASMA OPTIMAL trial was funded by Chiasma.
 

A version of this article originally appeared on Medscape.com.


CMSC MRI guidelines evolve into international consensus protocol

Article Type
Changed
Thu, 07/30/2020 - 12:10

Proposed updates to guidelines for magnetic resonance imaging in patients with multiple sclerosis (MS) are in the works to make the Consortium of Multiple Sclerosis Centers protocol and other international guidelines more similar, with the hope that internationally accepted consensus guidelines will improve lagging conformity with the recommendations.

“We’ve always envisioned the guidelines as being international, but now we have harmony with the groups, so this is truly a global protocol,” Anthony Traboulsee, MD, a professor of neurology and director of the MS clinic and neuromyelitis optica clinic at the University of British Columbia in Vancouver, said in presenting the proposed updates during the virtual meeting of the CMSC.

The updates reflect the input of an international expert panel convened by the CMSC in October 2019, made up of neurologists, radiologists, magnetic resonance technologists, and imaging scientists with expertise in MS. Attendees represented groups including the European-based Magnetic Resonance Imaging in MS (MAGNIMS), North American Imaging in Multiple Sclerosis Cooperative, National MS Society, Multiple Sclerosis Association of America, MRI manufacturers, and commercial image analysis.
 

Standardizing scans

While the mission was to review and update the current guidelines, an important overriding objective was to boost universal acceptance and improve the utilization of the protocol, which research shows is surprisingly low. According to one poster presented at the meeting, a real-world MRI dataset of 1,233 sessions showed only 8% satisfied criteria for the T1 sequence outlined in the 2018 guidelines, and only 7% satisfied criteria for the T2 sequence. “In a real-world MRI dataset of patients with MS, the conformance to the CMSC brain MRI guidelines was extremely low,” concluded the authors, who were with Icometrix, in Chicago and Belgium.

David Li, MD, also of the University of British Columbia and cochair of the MRI guideline committee, said the nonconformity has important implications. “Nonstandardized scans, with inconsistent slice thickness and gaps, nonstandardized slice acquisition (not in the subcallosal plane), and incomplete brain coverage, all contribute to scans that are difficult to compare,” he said. Standardized scans, in contrast, “allow for assessment of new lesions and lesion activity that are invaluable for diagnosis as well as determining the effectiveness of therapy or the need for initiating/changing therapy.”
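
Conceptually, a conformance check like the one the Icometrix poster implies reduces to comparing scan metadata against the guideline’s acquisition criteria. The sketch below is hypothetical: the field names and thresholds (3-D acquisition, slice thickness ≤1.5 mm, no inter-slice gap, whole-brain coverage, subcallosal alignment) are illustrative assumptions standing in for the published criteria, not the CMSC text or Icometrix’s software.

```python
# Hypothetical guideline-conformance check for a brain MRI session.

def conforms_to_brain_mri_guideline(scan: dict) -> bool:
    checks = [
        scan.get("acquisition") == "3D",
        scan.get("slice_thickness_mm", 99.0) <= 1.5,
        scan.get("slice_gap_mm", 99.0) == 0,
        scan.get("whole_brain_coverage", False),
        scan.get("aligned_subcallosal_plane", False),
    ]
    return all(checks)

example = {"acquisition": "3D", "slice_thickness_mm": 1.0, "slice_gap_mm": 0,
           "whole_brain_coverage": True, "aligned_subcallosal_plane": True}
print(conforms_to_brain_mri_guideline(example))  # True
```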

Dr. Traboulsee said the lack of adherence to guidelines may simply have to do with a mistaken perception of complexity. “Part of the challenge is MRI centers don’t realize how easy it is to implement these guidelines,” he said in presenting the proposed updates.

Dr. Traboulsee noted that the CMSC has been working with manufacturers to try to incorporate the protocol into the scanners “so that it’s just a button to press” for the MRI. “I think that will get us over a major hurdle of adaptation,” Dr. Traboulsee said. “Most radiologists said once they started using it they were really happy with it. They found they were using it beyond MS for other basic neurologic imaging, so just raising awareness and making things more of a one-step process for individuals to use will be helpful,” he said.
 

 

 

Repositioning consistency is key

Among the key suggestions the expert panel proposed for guideline updates is the use of the subcallosal plane for consistent repositioning, which should allow for more accuracy and consistency in the identification of lesions in MS, Dr. Traboulsee said. “A major change reflecting improvements in MRI technology is the ability to acquire high-resolution 3-D images, and that’s particularly helpful with fluid-attenuated inversion recovery (FLAIR) sequences, which is what we do to identify lesions,” he explained. “The repositioning along the subcallosal line is important because it allows us to easily compare studies over time. It takes very little time but allows us to compare studies over time much more easily,” he said.

Central vein sign

Another update is the establishment of a new category of optimum plus sequences allowing for the monitoring of brain atrophy and identifying lesions with a central vein sign, which has gained high interest as a marker on 3T MRI of demyelinating plaques in MS. As described in recent research, the central vein sign shows high accuracy in differentiating between MS and non-MS lesions.

“Many people have a few white spots on neuroimaging, but with MRI so much more available around the world, many of them are being misdiagnosed with MS,” Dr. Traboulsee said. “But the central vein sign, using a very simple MRI technique, can identify lesions with a vein in the center that (distinguishes them as) MS lesions.”

Though the process is still several years from routine clinical use, the proposed update would better implement susceptibility-weighted imaging, which has traditionally been used for functional MRI.
 

PML surveillance

The updates also include recommendations to help in the detection of the rare but potentially serious complication of some disease-modifying therapies of progressive multifocal leukoencephalopathy (PML). “We need a very quick and comprehensive way to monitor patients for PML before symptoms develop,” Dr. Traboulsee said. “The sequences we recommended were based on expert opinion of people who have worked quite a bit with PML in MS, and if one wants to survey for PML it’s only about a 10-minute scan.”

International protocol

Corey Ford, MD, a professor of neurology and director of the MS Specialty Clinic at the University of New Mexico Health Sciences Center in Albuquerque, commented that, with imaging playing such an important role in MS, the lack of adherence to the protocol can be a significant hindrance. “MRI is the most important imaging tool we have in the diagnosing and management of MS, but ... it’s quite amazing how different the sequences that are used can be when imaging centers are asked to image someone with a diagnosis of MS, so it’s a problem,” he said.

Dr. Ford speculated that part of the problem is simply inertia at some imaging centers. “Practices will have been programmed into their protocol for a long time, so when a patient comes in for imaging regarding MS, they may [turn to] their typical sequence,” he said. “There is an inertial barrier to upgrading that sequence, which can involve testing it out on the machine, training the techs to do it that way, and interpreting it for the physician clients who requested the imaging.”

In addition, there is a lack of exposure of MS imaging guidelines in the radiology literature, Dr. Ford added. “Maybe it’s a matter of giving more presentations at meetings that include radiologists, or getting the information out through the manufacturers. I think at the end of the day it could be a combination of all of those things,” he said.

However, the CMSC collaboration could make a big difference, Dr. Ford noted. “This is where the international protocol could be important in terms of making all of this happen,” he said. “What we’re seeing is the confluence of representatives of the U.S. and European centers [hashing] out a consensus, and if it’s international, I think that adds a lot of weight to an eventual implementation on a wider basis.”

“I think the group has done a stellar job, and we should not try to be too focused on adding everyone’s little tweak,” he noted. “If we can get a good baseline foundational imaging sequence that can be implemented worldwide, we would be much better off.”

The CMSC updated imaging guidelines are expected to be published in coming months. The most recent previous updates are available online.

Dr. Traboulsee disclosed relationships with Biogen, Chugai, Roche, Sanofi, and Teva. Dr. Ford and Dr. Li have disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Safe to skip radiotherapy with negative PET in Hodgkin lymphoma

Article Type
Changed
Wed, 06/17/2020 - 09:12

The majority of patients with early-stage unfavorable Hodgkin lymphoma respond well enough to a current standard regimen of four cycles of chemotherapy and can skip the additional radiotherapy that is normally included in the combined modality treatment, say experts reporting the final results from an international phase 3 randomized trial dubbed HD17.

“Most patients with this disease will not need radiotherapy any longer,” concluded first author Peter Borchmann, MD, assistant medical director in the department of hematology/oncology at the University Hospital Cologne (Germany).

Dr. Borchmann was speaking online as part of the virtual edition of the European Hematology Association 25th Annual Congress 2020.

“Importantly, the mortality of patients with early-stage unfavorable Hodgkin lymphoma in the HD17 study did not differ from the normal healthy German population, and this is the first time we have had this finding in one of our studies,” he emphasized.

Dr. Borchmann added that positron emission tomography (PET) imaging is key in deciding which patients can skip radiation.

“We conclude from the HD17 trial that the combined modality concept can and should be replaced by a PET-guided omission of radiotherapy for patients with newly diagnosed early-stage unfavorable Hodgkin lymphoma,” he said.

“The vast majority of early-stage unfavorable Hodgkin lymphoma patients can be treated with the brief and highly effective 2+2 chemotherapy alone,” he added.

Therefore, he continued, “PET-guided 2+2 chemotherapy is the new standard of care for the German Hodgkin Study Group,” which conducted the trial.

The use of both chemotherapy and radiation has long been a standard approach to treatment, and this combined modality treatment is highly effective, Dr. Borchmann explained. But it can cause long-term damage, and the known longer-term negative effects of radiotherapy, such as cardiovascular disease and second malignancies, are a particular concern because patients with early-stage Hodgkin lymphoma are relatively young, with a median age of around 30 years at disease onset.

An expert approached for comment said that there is ongoing momentum to skip radiotherapy when possible, and importantly, this study adds to those efforts.

“The treatment of Hodgkin lymphoma has moved for many years now to less radiation therapy, and this trend will continue with the results of this study,” commented John G. Gribben, MD, director of the Stem Cell Transplantation Program and medical director of the North East London Cancer Research Network Centre at Barts Cancer Center of Excellence and the London School of Medicine.

“We have moved to lower doses and involved fields with the intent of decreasing toxicity, and particularly long-term toxicity from radiotherapy,” he said in an interview. 

HD17 study details  

For the multicenter, phase 3 HD17 trial, Dr. Borchmann and colleagues turned to PET to identify patients who had and had not responded well to chemotherapy (PET negative and PET positive) and to determine if those who had responded well could safely avoid radiotherapy without compromising efficacy.

“We wanted to determine if we could reduce the treatment intensity by omission of radiotherapy in patients who respond very well to the systemic treatment, so who have a complete metabolic remission after the chemotherapy,” Dr. Borchmann said.

The 2+2 treatment approach includes two cycles of eBEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone) and two subsequent cycles of ABVD (doxorubicin, bleomycin, vinblastine, and dacarbazine).

The trial enrolled 1,100 patients with newly diagnosed Hodgkin lymphoma between January 2012 and March 2017. Of these, 979 patients had confirmed PET results: 651 (66.5%) were PET negative, defined as having a Deauville score (DS) of less than 3; 238 (24.3%) were DS3; and 90 (9.2%) were DS4.

The study met its primary endpoint of noninferiority in progression-free survival (PFS) at 5 years, with a PFS of 95.1% in the PET-guided group (n = 447), compared with 97.3% in the standard combined-modality treatment group (n = 428), over a median observation time of 45 months, for a difference of 2.2% (P = .12).

“We found that the survival levels were very high, and we can safely conclude the noninferiority of the PET-guided approach in PET-negative patients,” Dr. Borchmann said.

A further analysis showed that the 597 PET-negative patients who did not receive radiotherapy because of their PET status had 5-year PFS that was noninferior to the combined modality group (95.9% vs. 97.7%, respectively; P = .20).

And among the 646 patients who received the 2+2 regimen plus radiotherapy, the estimated 5-year PFS was significantly lower in those confirmed as PET positive (n = 328; 94.2%) than in those determined to be PET negative (n = 318; 97.6%; hazard ratio, 3.03).

A cut-off of DS4 for positivity was associated with a stronger effect: The estimated 5-year PFS was 81.6% for DS4 patients vs. 98.8% for DS3 patients and 97.6% for those with DS less than 3 (P < .0001).

“Only DS4 has a prognostic impact, but not DS3,” Dr. Borchmann said. “DS4 positivity indicates a relevant risk for treatment failure, however, there are few patients in this risk group (9.2% in this trial).”

The 5-year overall survival rates in an intent-to-treat analysis were 98.8% in the standard combined modality group and 98.4% in the PET-guided group.

With a median observation time of 47 months, there have been 10 fatal events in the trial out of 1,100 patients, including two Hodgkin lymphoma-related events and one treatment-related death.

“Overall, Hodgkin lymphoma or treatment-related mortality rates were extremely low,” Dr. Borchmann said.

The study was funded by Deutsche Krebshilfe. Dr. Borchmann and Dr. Gribben have reported no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.
