Polypectomy clipping benefit depends on anticoagulant type

A subanalysis by anticoagulant type reveals that, despite no overall benefit of prophylactic polypectomy clipping, outcomes differ by subgroup: Bleeding risk was significantly lower with clipping in patients on direct oral anticoagulants (DOACs) and significantly higher in patients on warfarin.

“In DOAC users, prophylactic clipping was associated with a 64% relative risk reduction in 30-day PPB [postpolypectomy bleeding],” versus no clipping, reported the authors of the study, published in Gastrointestinal Endoscopy.

Dr. Louis H.S. Lau

The removal of colonic polyps is known to carry a high risk of hemorrhage, and the use of antithrombotic medications, including DOACs and warfarin, is well established as a key risk factor for bleeding.

However, data on the effectiveness of prophylactic hemoclips in preventing PPB are inconsistent: One meta-analysis showed a benefit only in colonic lesions larger than 20 mm and proximal to the hepatic flexure, and other studies have failed to show any significant benefit.

To further investigate the effects among patients treated with anticoagulants, first author Louis H.S. Lau, MBChB, an assistant clinical professor in the department of medicine and therapeutics at the Chinese University of Hong Kong, and colleagues enrolled 547 patients with 1,485 polyps who underwent colonoscopic polypectomy while being treated with an oral anticoagulant between 2012 and 2020.

The proportions of warfarin and DOAC users were similar in the clipping and nonclipping groups, at about 50% each.

Overall, PPB occurred in 30 of the 285 patients (10.5%) who had clipping and 11 of the 262 patients (4.2%) who did not. The mean polyp size among patients with PPB was about 8-9 mm, and the mean time to bleeding was 7-9 days.

In the propensity-weighted analysis, there was no statistically significant difference in bleeding between those who did and did not receive clipping (odds ratio, 1.19; 95% confidence interval, 0.73-1.95; P = .497).
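
Incidentally, the gap between this weighted estimate and the raw counts above illustrates why the weighting matters: the crude odds ratio from the unadjusted 2x2 table is far higher, plausibly because higher-risk polyps were more likely to be clipped in the first place (confounding by indication). A minimal sketch of the crude arithmetic, using only the counts reported above; the interpretation of the gap is ours, not the study's:

```python
# Crude (unadjusted) odds ratio from the counts reported above:
# clipping: 30 bleeds among 285 patients; no clipping: 11 among 262.
bled_clip, total_clip = 30, 285
bled_noclip, total_noclip = 11, 262

odds_clip = bled_clip / (total_clip - bled_clip)          # 30/255 ≈ 0.118
odds_noclip = bled_noclip / (total_noclip - bled_noclip)  # 11/251 ≈ 0.044

crude_or = odds_clip / odds_noclip
print(f"crude OR = {crude_or:.2f}")  # ≈ 2.68, vs. a weighted OR of 1.19
```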

However, a per-patient subgroup analysis showed that prophylactic clipping was associated with a significantly lower 30-day PPB risk among patients treated with DOACs (OR, 0.36; 95% CI, 0.16-0.82; P = .015) but with a significantly higher bleeding risk in patients taking warfarin (OR, 2.98; P = .003) and in those with heparin bridging (OR, 1.69; P = .023).
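
The "64% relative risk reduction" quoted earlier follows from this subgroup odds ratio, under the standard approximation that an odds ratio tracks the relative risk when the outcome is uncommon, as 30-day PPB is here:

$$\widehat{\mathrm{RRR}} \approx 1 - \mathrm{OR} = 1 - 0.36 = 0.64 \quad (64\%).$$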

A subanalysis by polyp size (<10 mm vs. 10-20 mm vs. >20 mm) showed no benefit of prophylactic clipping in any size subgroup, including the largest polyps.

Of note, the overall analysis showed a significantly higher risk of PPB with hot resection polypectomy using electrocautery (OR, 9.76; 95% CI, 3.94-32.60; P < .001), compared with cold biopsy or snare polypectomy.

The authors noted several limitations, including the relatively high overall rate of bleeding (7.5%), which could be related to the more frequent use of hot snare earlier in the study period.

Effects caused by DOACs’ rapid onset?

In speculating on the reasons for the different risks observed between DOACs and warfarin, the authors suggested that “a possible explanation could be the rapid onset of action and steady pharmacokinetics of DOAC, reducing the necessity of heparin bridging in most cases.”

Meanwhile, the increased bleeding observed with warfarin despite clipping could be “related to the intrinsic properties of warfarin,” they added.

“Because of its slow onset of action, a larger proportion of patients will receive heparin bridging, which was previously reported to be a significant risk factor of PPB,” they noted. “Moreover, due to the substantial fluctuation in anticoagulation effect during warfarin titration, it may provoke delayed bleeding after the endoclips fall off subsequently.”

Unique focus on anticoagulant-treated patients

Senior author Raymond Shing-Yan Tang, MD, an assistant professor in the department of medicine and therapeutics, faculty of medicine, at the Chinese University of Hong Kong, noted that the study’s unique focus on patients treated with anticoagulants is important.

Dr. Raymond Shing-Yan Tang

“Prior studies evaluating the effectiveness of prophylactic clipping in preventing postpolypectomy bleeding included a more heterogeneous patient population with both nonanticoagulated and anticoagulated patients,” Dr. Tang said in an interview.

“The strengths of our study were that it was a dedicated study that included only patients on oral anticoagulants, including warfarin and DOACs, and had a relatively larger sample size when compared to prior studies,” he said.

While most guidelines recommend prophylactic clipping in patients undergoing polypectomy for colonic lesions larger than 20 mm and proximal to the hepatic flexure, a variety of factors may ultimately guide decisions, Dr. Tang noted.

“In clinical practice, the decision to use prophylactic clipping after polypectomy in patients on anticoagulation is often individualized at the discretion of the endoscopist,” he said.

Anticoagulation question is important, but study has limitations

In commenting on the study, Heiko Pohl, MD, a professor of medicine at Geisel School of Medicine at Dartmouth, Hanover, N.H., noted that, while this study is important, it has some key limitations.

Dr. Heiko Pohl

“The question the study raises is relevant – we really have no good idea whether this subset of patients that are anticoagulated should always be clipped,” he said in an interview.

However, he noted potential limitations in the methodology.

“It’s difficult to control for all important factors in a propensity trial,” he said, adding “there could be some unmeasured confounders that could not be accounted for due to the retrospective design.”

Nevertheless, Dr. Pohl agreed that the relatively rapid action of DOACs could help explain the effects.

“DOACs may have a high risk of bleeding sooner [than warfarin] to begin with, and therefore the clipping makes sense, so that may be the mechanistic idea,” he said. “But it’s difficult to generalize, because there have been no previous studies that have shown benefits from clipping for smaller polyps, even among patients on anticoagulants.”

The authors had no disclosures to report. Dr. Pohl has received grants from Steris and Cosmo Pharmaceuticals.

FROM GASTROINTESTINAL ENDOSCOPY


Obesity interactions complex in acute pancreatitis

Obesity, in combination with other risk factors, is associated with increased morbidity and mortality in acute pancreatitis (AP); however, body mass index (BMI) alone is not a successful predictor of disease severity, new research shows.

“As there was no agreement or consistency between BMI and AP severity, it can be concluded that AP severity cannot be predicted successfully by examining BMI only,” reported the authors in research published recently in Pancreatology.


The course of acute pancreatitis is typically mild in the majority (80%-85%) of cases; however, in severe cases, permanent organ failure can occur, with much worse outcomes and mortality rates of up to 35%.

Research has previously shown not only a link between obesity and acute pancreatitis but also an increased risk for complications and in-hospital mortality in obese patients with severe cases of acute pancreatitis – though a wide range of factors and comorbidities may complicate the association.

To more closely evaluate the course and outcomes of acute pancreatitis based on BMI classification, study authors led by Ali Tuzun Ince, MD, of the department of internal medicine, Gastroenterology Clinic of Bezmialem Vakif University, Istanbul, analyzed retrospective data from 2010 to 2020 on 1,334 adult patients (720 female, 614 male) who were diagnosed with acute pancreatitis per the Revised Atlanta Classification (RAC) criteria.

The patients were stratified by BMI as normal weight, overweight, or obese, and by disease severity as mild, moderate, or severe acute pancreatitis (the last defined by permanent organ failure).
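
(BMI stratification of this kind typically uses the standard World Health Organization adult cutoffs; the study's exact thresholds are not stated in this article, so the values in the minimal sketch below are an assumption.)

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify adult BMI using standard WHO cutoffs (assumed here;
    the study's exact thresholds are not stated in this article)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

print(bmi_category(95, 1.75))  # BMI ≈ 31.0 -> "obese"
```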

In terms of acute pancreatitis severity, based on RAC criteria, 57.1% of patients had mild disease, 20.4% had moderate disease, and 22.5% had severe disease.

The overall mortality rate was 9.9% (n = 132); half of these patients were obese, and 87% had severe acute pancreatitis.

The overall rate of complications was 42.9%, including 20.8% in the normal weight group, 40.6% in the overweight group, and 38.6% in the obese group.

Compared with the normal-weight group, patients in the overweight and obese groups also had higher mortality rates (3.7% and 4.9%, respectively), higher rates of interventional procedures (36% and 39%, respectively), and longer hospital stays (11.6 and 9.8 days, respectively).

Other factors significantly associated with an increased mortality risk, in addition to obesity (P = .046), included old age (P < .001), male sex (P = .05), alcohol use (P = .014), low hematocrit (P = .044), high C-reactive protein (P = .024), moderate to severe and severe acute pancreatitis (P = .02 and P < .001, respectively), and any complications (P < .001).

Risk factors associated with increased ICU admission differed from those for mortality and included female gender (P = .024), smoking (P = .021), hypertriglyceridemia (P = .047), idiopathic etiology (P = .023), and moderate to severe and severe acute pancreatitis (P < .001).

Of note, there were no significant associations between BMI and either the RAC score or Balthazar CT severity index (Balthazar CTSI) groups.

Specifically, among patients considered to have severe acute pancreatitis per Balthazar CTSI, 6.3% were of normal weight, 5% were overweight, and 7.1% were obese.

“In addition, since agreement and consistency between BMI and Balthazar score cannot be determined, the Balthazar score cannot be estimated from BMI,” the authors reported.

While the prediction of prognosis in acute pancreatitis is gaining interest, the findings underscore the role of combined factors, they added.

“Although many scoring systems are currently in use [that] attempt to estimate the severity [in acute pancreatitis], none is 100% accurate yet,” the authors noted. “Each risk factor exacerbates the course of disease. Therefore, it would be better to consider the combined effects of risk factors.”

That being said, the findings show “mortality is increased significantly by the combined presence of risk factors such as male sex, OB [obesity], alcohol, MSAP [moderate to severe acute pancreatitis] and SAP [severe acute pancreatitis], all kinds of complications, old age, low Hct, and high CRP,” they wrote.

Obesity’s complex interactions

Commenting on the study, Vijay P. Singh, MD, a professor of medicine in the division of gastroenterology and hepatology at the Mayo Clinic in Scottsdale, Ariz., agreed that the complex interactions of risk factors, particularly with obesity, can be tricky to disentangle.

“Broadly, the study confirms several previous reports from different parts of the world that obesity was associated with increased mortality in acute pancreatitis,” he said in an interview.

“However, obesity had two complex interactions, the first that obesity is also associated with increased diabetes, and hypertriglyceridemia, which may themselves be risk factors for severity,” he explained.

“The second one is that intermediary severity markers [e.g., Balthazar score on imaging] did not correlate with the BMI categories.”

Dr. Singh noted that this is “likely because therapies like IV fluids that may get more intense in predicted severe disease alter the natural course of pancreatitis.”

The findings are a reminder that “BMI is only a number that attempts to quantify fat,” Dr. Singh said, noting that BMI doesn’t address either the location of fat, such as being in close proximity to the pancreas, or fat composition, such as the proportion of unsaturated fat.

“When the unsaturated fat proportion is higher, the pancreatitis is worse, even at smaller total fat amounts [for example, at a lower BMI],” he said. “Taking these aspects into account may help in risk assessment.”

The authors and Dr. Singh had no disclosures to report.


FROM PANCREATOLOGY


‘Agony of choice’ for clinicians treating leukemia

With an abundance of targeted therapies transforming the treatment landscape for chronic lymphocytic leukemia (CLL), picking the optimal drug or drug sequence for the right situation can be a challenge, but emerging data is helping guide clinicians facing the “agony of choice,” a new review reports.

“Targeted therapies have outnumbered chemoimmunotherapy-based treatment approaches, demonstrating superior efficacy and tolerability profiles across nearly all CLL patient subgroups in the frontline and relapsed disease treatment setting,” author Jan-Paul Bohn, MD, PhD, of the department of internal medicine V, hematology and oncology, at Medical University of Innsbruck (Austria), reported in the review published in Memo, the Magazine of European Medical Oncology.

The options leave clinicians “spoilt for choice when selecting optimal therapy,” he said.

The three major drug classes to emerge – inhibitors of Bruton tyrosine kinase (BTK), antiapoptotic protein B-cell lymphoma 2 (BCL2) and phosphoinositide 3’-kinase (PI3K) – all appear similar in efficacy and tolerability.

Particularly in high-risk patients, the drugs have been so effective that the less desirable previous standard of “chemoimmunotherapy has widely faded into the background in the Western hemisphere,” Dr. Bohn wrote.

However, given the newer drugs’ caveats, including acquired resistance and potential toxicities, the challenge has shifted to determining how best to sequence and/or combine the agents.

Frontline therapy

In terms of frontline options for CLL therapy, the BTK inhibitors, along with the BCL2 inhibitor venetoclax, have been key in negating the need for chemotherapy, with some of the latest data showing superiority of venetoclax in combination with obinutuzumab (GVe) over chemotherapy even in the favorable-risk subset of patients with mutated IGHV status and without TP53 disruption.

Hence, “chemoimmunotherapy may now even be questioned in the remaining subset of CLL patients with mutated IGHV status and without TP53 disruption,” Dr. Bohn reported.

That being said, the criteria for treatment choices in the frontline setting among the newer drug classes can often come down to the key issues of patients’ comorbidities and treatment preferences.

For example, for patients at higher risk of tumor lysis syndrome (TLS), or with issues such as declining renal function, continuous BTK inhibitor treatment may be the preferred choice over the combination of venetoclax plus obinutuzumab (GVe), Dr. Bohn noted.

Conversely, for patients with cardiac comorbidities or a higher risk of bleeding, the GVe combination may be preferred over ibrutinib, with recent findings showing ibrutinib to be associated with as much as an 18-times higher risk of sudden unexplained death or cardiac death in young and fit patients who had preexisting arterial hypertension and/or a history of cardiac disorders requiring therapy.

For those with cardiac comorbidities, the more selective second-generation BTK inhibitor acalabrutinib is a potentially favorable alternative, as the drug is “at least similarly effective and more favorable in terms of tolerability, compared with ibrutinib, particularly as far as cardiac and bleeding side effects are considered,” Dr. Bohn said.

And in higher-risk cases involving TP53 dysfunction, a BTK inhibitor may be superior to GVe for frontline treatment, Dr. Bohn noted, with data showing progression-free survival in patients with and without deletion 17p to be significantly reduced with GVe versus the BTK inhibitor ibrutinib.
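
Purely as a reading aid, the frontline preferences described above can be restated as a toy decision sketch. This is not clinical guidance and not an algorithm from the review; the precedence of the checks is our assumption, and real decisions weigh many more factors:

```python
def frontline_preference(tp53_disrupted: bool, tls_risk: bool,
                         renal_decline: bool, cardiac_disease: bool,
                         bleeding_risk: bool) -> str:
    """Toy restatement of the frontline preferences described in this
    article. Not clinical guidance; the ordering of the checks below
    is an assumption, and real-world choices are individualized."""
    if tp53_disrupted:
        # PFS data favored a BTK inhibitor over GVe with TP53 dysfunction.
        return "BTK inhibitor"
    if tls_risk or renal_decline:
        # Continuous BTK inhibition avoids venetoclax-associated TLS risk.
        return "continuous BTK inhibitor"
    if cardiac_disease or bleeding_risk:
        # GVe preferred over ibrutinib; acalabrutinib is an alternative.
        return "GVe (or the second-generation BTK inhibitor acalabrutinib)"
    return "either class, guided by comorbidities and patient preference"
```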

Relapsed and refractory disease

With similarly high efficacy observed with the new drug classes among relapsed and/or refractory patients, chemoimmunotherapy has likewise “become obsolete in nearly all patients naive to novel agents at relapse who typically present with genetically high-risk disease,” Dr. Bohn noted.

He wrote that most of the recommendations for frontline therapy hold true in the relapsed and refractory patients, with comorbidities and personal preferences again key drivers of treatment choices.

While data is currently limited regarding benefits of venetoclax-based regimens over BTK inhibitors in relapsed/refractory patients, there is “growing evidence suggesting similar clinical outcomes achievable with these agents in either order,” Dr. Bohn wrote.

Further recommendations regarding relapsed or refractory patients include:

  • Among patients who experience disease progression while on continuous treatment with BTK inhibitors, venetoclax-based regimens seem most effective. However, with relapse after venetoclax-based regimens, growing evidence supports retreatment with the drug “depending on depth and duration of response achieved after first venetoclax exposure,” Dr. Bohn noted.
  • For patients with deletion 17p, venetoclax shows promising efficacy during relapse when given as monotherapy until disease progression or occurrence of unacceptable toxicity.
  • And for patients with TP53 abnormalities, the considerations are the same as for frontline therapy, with venetoclax showing promising efficacy when given as monotherapy until disease progression or occurrence of unacceptable toxicity.

Of note, PI3K inhibitors are generally not used in CLL patients naive to BTK and BCL2 inhibitors because of the higher risk of immune-mediated toxicities and infectious complications associated with the currently approved PI3K inhibitors idelalisib and duvelisib, he reported.

Nevertheless, “PI3K inhibitors remain a valuable therapeutic addition in patients refractory or intolerant to BTK inhibitors and venetoclax-based regimens,” Dr. Bohn said.

Newer agents, fixed duration

Commenting on the review, hematologist Seema A. Bhat, MD, an assistant professor with the Ohio State University Comprehensive Cancer Center, Columbus, said that the advances with targeted therapies in CLL are paying off with improved survival.

Dr. Seema Bhat

“With these recent advances in the treatment of CLL, especially the availability of targeted therapies, there has been an improvement in survival of patients with CLL, as the CLL-related death rate steadily reduced by approximately 3% per year between 2006 and 2015,” she said in an interview.
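
Taking the quoted 3% annual decline at face value, the cumulative effect over the 2006-2015 window compounds to roughly a one-quarter reduction:

$$(1 - 0.03)^{9} \approx 0.76,$$

that is, a CLL-related death rate about 24% lower at the end of the period than at the start.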

She added that even newer agents in development, including the reversibly binding BTK inhibitors pirtobrutinib and nemtabrutinib, will further add to the treatment choices for patients when approved.

Meanwhile, a key area of focus is the combination of BTK inhibitors and BCL2 inhibitors, given for a fixed duration to obtain a deeper response and hence the possibility of time-limited therapy, she noted. “We are also excited about the possibility of having more fixed-duration treatments available for our patients, which will make their treatment journey less troublesome, both physically as well as financially.”

Dr. Bohn reported receiving personal fees from AbbVie, AstraZeneca, and Janssen for advisory board participation. Dr. Bhat has served on an advisory board for AstraZeneca and received an honorarium from the company.

Publications
Topics
Sections

With an abundance of targeted therapies transforming the treatment landscape for chronic lymphocytic leukemia (CLL), picking the optimal drug or drug sequence for the right situation can be a challenge, but emerging data is helping guide clinicians facing the “agony of choice,” a new review reports.

“Targeted therapies have outnumbered chemoimmunotherapy-based treatment approaches, demonstrating superior efficacy and tolerability profiles across nearly all CLL patient subgroups in the frontline and relapsed disease treatment setting,” author Jan-Paul Bohn, MD, PhD, of the department of internal medicine V, hematology and oncology, at Medical University of Innsbruck (Austria), reported in the review published in Memo, the Magazine of European Medical Oncology.

The options leave clinicians “spoilt for choice when selecting optimal therapy,” he said.

The three major drug classes to emerge – inhibitors of Bruton tyrosine kinase (BTK), antiapoptotic protein B-cell lymphoma 2 (BCL2) and phosphoinositide 3’-kinase (PI3K) – all appear similar in efficacy and tolerability.

Particularly in high-risk patients, the drugs have been so effective that the less desirable previous standard of “chemoimmunotherapy has widely faded into the background in the Western hemisphere,” Dr. Bohn wrote.

However, with caveats of the newer drugs including acquired resistances and potential toxicities, challenges have shifted to determining how to best juggle and/or combine the agents.
 

Frontline therapy

In terms of frontline options for CLL therapy, the BTK inhibitors, along with the BCL2 inhibitor venetoclax have been key in negating the need for chemotherapy, with some of the latest data showing superiority of venetoclax in combination with obinutuzumab (GVe) over chemotherapy even in the higher-risk subset of patients with mutated IGHV status and without TP53 disruption.

Hence, “chemoimmunotherapy may now even be questioned in the remaining subset of CLL patients with mutated IGHV status and without TP53 disruption,” Dr. Bohn reported.

That being said, the criteria for treatment choices in the frontline setting among the newer drug classes can often come down to the key issues of patients’ comorbidities and treatment preferences.

For example, in terms of patients who have higher risk because of tumor lysis syndrome (TLS), or issues including declining renal function, continuous BTK inhibitor treatment may be the preferred choice over the combination of venetoclax plus obinutuzumab (GVe), Dr. Bohn noted.

Conversely, for patients with cardiac comorbidities or a higher risk of bleeding, the GVe combination may be preferred over ibrutinib, with recent findings showing ibrutinib to be associated with as much as an 18-times higher risk of sudden unexplained death or cardiac death in young and fit patients who had preexisting arterial hypertension and/or a history of cardiac disorders requiring therapy.

For those with cardiac comorbidities, the more selective second-generation BTK inhibitor acalabrutinib is a potentially favorable alternative, as the drug is “at least similarly effective and more favorable in terms of tolerability, compared with ibrutinib, particularly as far as cardiac and bleeding side effects are considered,” Dr. Bohn said.

And in higher-risk cases involving TP53 dysfunction, a BTK inhibitor may be superior to GVe for frontline treatment, Dr. Bohn noted, with data showing progression-free survival in patients with and without deletion 17p to be significantly reduced with GVe versus the BTK inhibitor ibrutinib.
 

 

 

Relapsed and refractory disease

With similarly high efficacy observed with the new drug classes among relapsed and/or refractory patients, chemoimmunotherapy has likewise “become obsolete in nearly all patients naive to novel agents at relapse who typically present with genetically high-risk disease,” Dr. Bohn noted.

He wrote that most of the recommendations for frontline therapy hold true in the relapsed and refractory patients, with comorbidities and personal preferences again key drivers of treatment choices.

While data is currently limited regarding benefits of venetoclax-based regimens over BTK inhibitors in relapsed/refractory patients, there is “growing evidence suggesting similar clinical outcomes achievable with these agents in either order,” Dr. Bohn wrote.

Further recommendations regarding relapsed or refractory patients include:

  • Among patients who do experience disease progression while on continuous treatment with BTK inhibitors, venetoclax-based regimes seem most effective. However, with relapse after venetoclax-based regimes, some growing evidence supports retreatment with the drug “depending on depth and duration of response achieved after first venetoclax exposure,” Dr. Bohn noted.
  • For patients with deletion 17p, venetoclax shows promising efficacy during relapse when given as monotherapy until disease progression or occurrence of unacceptable toxicity.
  • And for patients with TP53 abnormalities, the considerations are the same as for frontline therapy, with venetoclax showing promising efficacy when given in monotherapy until disease progression or occurrence of unacceptable toxicity.

Of note, PI3K inhibitors are generally not used in CLL patients naive to BTK and BCL2 inhibitors because of the higher risk of immune-mediated toxicities and infectious complications associated with the currently approved PI3K inhibitors idelalisib and duvelisib, he reported.

Nevertheless, “PI3K inhibitors remain a valuable therapeutic addition in patients refractory or intolerant to BTK inhibitors and venetoclax-based regimens,” Dr. Bohn said.
 

Newer agents, fixed duration

Commenting on the review, hematologist Seema A. Bhat, MD, an assistant professor with the Ohio State University Comprehensive Cancer Center, Columbus, said that the advances with targeted therapies in CLL are paying off with improved survival.

Dr. Seema Bhat

“With these recent advances in the treatment of CLL, especially the availability of targeted therapies, there has been an improvement in survival of patients with CLL, as the CLL-related death rate steadily reduced by approximately 3% per year between 2006 and 2015,” she said in an interview.

She added that even-newer agents in development, including the reversibly binding BTK inhibitor–like pirtobrutinib and nemtabrutinib, when approved, will further add to the treatment choices for patients.

Meanwhile, a key area of focus is the combination of BTK inhibitors and BCL2 inhibitors, specifically for a fixed duration of time to obtain a deeper response and hence possibility a time-limited therapy, she noted. “We are also excited about the possibility of having more fixed-duration treatments available for our patients, which will make their treatment journey less troublesome, both physically as well as financially.”

Dr. Bohn reported receiving personal fees from AbbVie, AstraZeneca and Janssen for advisory board participation. Dr. Bhat has served on advisory board for AstraZeneca and received honorarium from them.

With an abundance of targeted therapies transforming the treatment landscape for chronic lymphocytic leukemia (CLL), picking the optimal drug or drug sequence for the right situation can be a challenge, but emerging data is helping guide clinicians facing the “agony of choice,” a new review reports.

“Targeted therapies have outnumbered chemoimmunotherapy-based treatment approaches, demonstrating superior efficacy and tolerability profiles across nearly all CLL patient subgroups in the frontline and relapsed disease treatment setting,” author Jan-Paul Bohn, MD, PhD, of the department of internal medicine V, hematology and oncology, at Medical University of Innsbruck (Austria), reported in the review published in Memo, the Magazine of European Medical Oncology.

The options leave clinicians “spoilt for choice when selecting optimal therapy,” he said.

The three major drug classes to emerge – inhibitors of Bruton tyrosine kinase (BTK), antiapoptotic protein B-cell lymphoma 2 (BCL2) and phosphoinositide 3’-kinase (PI3K) – all appear similar in efficacy and tolerability.

Particularly in high-risk patients, the drugs have been so effective that the less desirable previous standard of “chemoimmunotherapy has widely faded into the background in the Western hemisphere,” Dr. Bohn wrote.

However, with caveats of the newer drugs including acquired resistances and potential toxicities, challenges have shifted to determining how to best juggle and/or combine the agents.
 

Frontline therapy

In terms of frontline options for CLL therapy, the BTK inhibitors, along with the BCL2 inhibitor venetoclax have been key in negating the need for chemotherapy, with some of the latest data showing superiority of venetoclax in combination with obinutuzumab (GVe) over chemotherapy even in the higher-risk subset of patients with mutated IGHV status and without TP53 disruption.

Hence, “chemoimmunotherapy may now even be questioned in the remaining subset of CLL patients with mutated IGHV status and without TP53 disruption,” Dr. Bohn reported.

That being said, the criteria for treatment choices in the frontline setting among the newer drug classes can often come down to the key issues of patients’ comorbidities and treatment preferences.

For example, in terms of patients who have higher risk because of tumor lysis syndrome (TLS), or issues including declining renal function, continuous BTK inhibitor treatment may be the preferred choice over the combination of venetoclax plus obinutuzumab (GVe), Dr. Bohn noted.

Conversely, for patients with cardiac comorbidities or a higher risk of bleeding, the GVe combination may be preferred over ibrutinib, with recent findings showing ibrutinib to be associated with as much as an 18-times higher risk of sudden unexplained death or cardiac death in young and fit patients who had preexisting arterial hypertension and/or a history of cardiac disorders requiring therapy.

For those with cardiac comorbidities, the more selective second-generation BTK inhibitor acalabrutinib is a potentially favorable alternative, as the drug is “at least similarly effective and more favorable in terms of tolerability, compared with ibrutinib, particularly as far as cardiac and bleeding side effects are considered,” Dr. Bohn said.

And in higher-risk cases involving TP53 dysfunction, a BTK inhibitor may be superior to GVe for frontline treatment, Dr. Bohn noted, with data showing progression-free survival in patients with and without deletion 17p to be significantly reduced with GVe versus the BTK inhibitor ibrutinib.
 

 

 

Relapsed and refractory disease

With similarly high efficacy observed with the new drug classes among relapsed and/or refractory patients, chemoimmunotherapy has likewise “become obsolete in nearly all patients naive to novel agents at relapse who typically present with genetically high-risk disease,” Dr. Bohn noted.

He wrote that most of the recommendations for frontline therapy hold true in the relapsed and refractory patients, with comorbidities and personal preferences again key drivers of treatment choices.

While data is currently limited regarding benefits of venetoclax-based regimens over BTK inhibitors in relapsed/refractory patients, there is “growing evidence suggesting similar clinical outcomes achievable with these agents in either order,” Dr. Bohn wrote.

Further recommendations regarding relapsed or refractory patients include:

  • Among patients who do experience disease progression while on continuous treatment with BTK inhibitors, venetoclax-based regimes seem most effective. However, with relapse after venetoclax-based regimes, some growing evidence supports retreatment with the drug “depending on depth and duration of response achieved after first venetoclax exposure,” Dr. Bohn noted.
  • For patients with deletion 17p, venetoclax shows promising efficacy during relapse when given as monotherapy until disease progression or occurrence of unacceptable toxicity.
  • And for patients with TP53 abnormalities, the considerations are the same as for frontline therapy, with venetoclax showing promising efficacy when given in monotherapy until disease progression or occurrence of unacceptable toxicity.

Of note, PI3K inhibitors are generally not used in CLL patients naive to BTK and BCL2 inhibitors because of the higher risk of immune-mediated toxicities and infectious complications associated with the currently approved PI3K inhibitors idelalisib and duvelisib, he reported.

Nevertheless, “PI3K inhibitors remain a valuable therapeutic addition in patients refractory or intolerant to BTK inhibitors and venetoclax-based regimens,” Dr. Bohn said.
 

Newer agents, fixed duration

Commenting on the review, hematologist Seema A. Bhat, MD, an assistant professor with the Ohio State University Comprehensive Cancer Center, Columbus, said that the advances with targeted therapies in CLL are paying off with improved survival.


“With these recent advances in the treatment of CLL, especially the availability of targeted therapies, there has been an improvement in survival of patients with CLL, as the CLL-related death rate steadily reduced by approximately 3% per year between 2006 and 2015,” she said in an interview.

She added that even-newer agents in development, including the reversibly binding BTK inhibitors pirtobrutinib and nemtabrutinib, will, when approved, further add to the treatment choices for patients.

Meanwhile, a key area of focus is the combination of BTK inhibitors and BCL2 inhibitors, given specifically for a fixed duration to obtain a deeper response and hence the possibility of time-limited therapy, she noted. “We are also excited about the possibility of having more fixed-duration treatments available for our patients, which will make their treatment journey less troublesome, both physically as well as financially.”

Dr. Bohn reported receiving personal fees from AbbVie, AstraZeneca, and Janssen for advisory board participation. Dr. Bhat has served on an advisory board for AstraZeneca and received an honorarium from the company.

FROM MEMO – MAGAZINE OF EUROPEAN MEDICAL ONCOLOGY


Surgery shows no survival, morbidity benefit for mild hyperparathyroidism


Patients who receive parathyroidectomy for mild primary hyperparathyroidism show no benefits in survival or morbidity, including fractures, cancer, or cardiovascular outcomes over more than 10 years, compared with those not receiving the surgery, results from a randomized, prospective trial show.

“In contrast to existing data showing increased mortality and cardiovascular morbidity in mild primary hyperparathyroidism, we did not find any treatment effect of parathyroidectomy on these important clinical endpoints,” report the authors of the study, published in the Annals of Internal Medicine.
 

Reason to evaluate and revise current recommendations?

With mild primary hyperparathyroidism becoming the predominant form of hyperparathyroidism, the results suggest rethinking the current recommendations for the condition, the study authors note. 

“Over the years, more active management of mild primary hyperparathyroidism has been recommended, with a widening of criteria for parathyroidectomy,” they write.

“With the low number of kidney stones (n = 5) and no effect of parathyroidectomy on fractures, there may be a need to evaluate and potentially revise the current recommendations.”

The authors of an accompanying editorial agree that “the [results] provide a strong rationale for nonoperative management of patients with mild primary hyperparathyroidism.”

“The findings suggest that most patients can be managed nonoperatively, with monitoring of serum calcium levels every 1 to 2 years or if symptoms occur,” write the editorial authors, Mark J. Bolland, PhD, and Andrew Grey, MD, of the department of medicine, University of Auckland, New Zealand.

Although parathyroidectomy is recommended for patients with hyperparathyroidism who have severe hypercalcemia or overt symptoms, the long-term benefits of surgery in milder cases have been debated.

Importantly, most previous studies showing benefits, such as reductions in fracture risk with parathyroidectomy, have not distinguished between mild and more severe primary hyperparathyroidism, the authors note.
 

No significant differences in mortality between surgery, nonsurgery groups

For the Scandinavian Investigation of Primary Hyperparathyroidism (SIPH) trial, first author Mikkel Pretorius, MD, Oslo University Hospital and Faculty of Medicine, University of Oslo, and colleagues enrolled 191 patients aged 50-80 years with mild primary hyperparathyroidism, defined as serum calcium levels of 10.42-11.22 mg/dL, in Sweden, Norway, and Denmark between 1998 and 2005.

Participants were randomized to receive surgery (n = 95) or nonoperative observation without intervention (n = 96).

After 10 years of follow-up, 129 patients had completed the final visit. The overall death rate was 7.6%, with eight deaths in the surgery group and seven in the nonsurgery group, a difference that was not statistically significant (HR, 1.17; P = .76).

During an extended observation period that lasted until 2018, overall mortality rose to 23%, with a relatively even distribution of 24 deaths in the surgery group and 20 among those with no surgery.
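As a quick arithmetic check of that extended-period figure, here is a minimal Python sketch, assuming all 191 randomized patients remain in the denominator (the article does not restate the denominator used):

```python
# Crude mortality over the extended observation period (to 2018),
# assuming all 191 randomized patients remain in the denominator.
deaths_surgery = 24
deaths_nonsurgery = 20
n_randomized = 191

overall_mortality = (deaths_surgery + deaths_nonsurgery) / n_randomized
print(f"Overall mortality: {overall_mortality:.1%}")  # ~23.0%
```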

Whether the chronic hypercalcemia of primary hyperparathyroidism is associated with an increased risk of cardiovascular disease or cancer has been debated; however, “the absolute numbers for these and the other disease-specific causes of death were nearly identical between groups,” the authors write, with 17 deaths from cardiovascular disease, eight from cancer, and eight from cerebrovascular disease.

In terms of morbidity, including cardiovascular events, cerebrovascular events, cancer, peripheral fractures, and renal stones, there were 101 events overall: 52 in the parathyroidectomy group and 49 in the nonsurgery group, again not a significant difference.

Sixteen vertebral fractures occurred in 14 patients overall, evenly split at seven patients in each group.

The authors note that “the incidence of peripheral fractures for women in our study was around 2,900 per 100,000 person-years, in the same range as for 70-year-old women in a study in Gothenburg, Sweden (about 2,600 per 100,000 person-years).”



There were no between-group differences in terms of time to death or first morbidity event for any of the prespecified events.

Of the 96 patients originally assigned to the nonsurgery group, 17 (18%) had surgery during follow-up, including three for serious hypercalcemia, three by their own choice, two for decreasing bone density, one for kidney stones, and the others for unclear or unrelated reasons.

Study limitations include that only 26 men (13 in each group) were included, and only 16 completed the study. “The external validity for men based on this study is therefore limited,” the authors note.

And although most people with primary hyperparathyroidism are adults, the older age of participants suggests the results should not be generalized to younger patients with benign parathyroid tumors.

The editorialists note that age should be one of the few factors that may, indeed, suggest appropriate candidates for parathyroidectomy.

“Younger patients (aged < 50 years) may have more aggressive disease,” they explain.

In addition, “patients with serum calcium levels above 3 mmol/L (> 12 mg/dL) are at greater risk for symptomatic hypercalcemia, and patients with a recent history of kidney stones may have fewer future stones after surgical cure.”

“Yet, such patients are a small minority of those with primary hyperparathyroidism,” they note.

The study authors underscore that “our data add evidence to guide the decisionmaking process in deliberative dialogue between clinicians and patients.”

The study received funding from Swedish government grants, the Norwegian Research Council, and the South-Eastern Norway Regional Health Authority.

A version of this article first appeared on Medscape.com.


FROM ANNALS OF INTERNAL MEDICINE


Melanoma screening study stokes overdiagnosis debate


Screening for melanoma at the primary care level is associated with significant increases in the detection of in situ and invasive thin melanomas but not thicker, more worrisome disease, new research shows.

Without a corresponding decrease in melanoma mortality, an increase in the detection of those thin melanomas “raises the concern that early detection efforts, such as visual skin screening, may result in overdiagnosis,” the study authors wrote. “The value of a cancer screening program should most rigorously be measured not by the number of new, early cancers detected, but by its impact on the development of late-stage disease and its associated morbidity, cost, and mortality.”

The research, published in JAMA Dermatology, has reignited the controversy over the benefits and harms of primary care skin cancer screening, garnering two editorials that reflect different sides of the debate.

In one, Robert A. Swerlick, MD, pointed out that, “despite public messaging to the contrary, to my knowledge there is no evidence that routine skin examinations have any effect on melanoma mortality.

“The stage shift to smaller tumors should not be viewed as success and is very strong evidence of overdiagnosis,” wrote Dr. Swerlick, of the department of dermatology, Emory University, Atlanta.

The other editorial, however, argued that routine screening saves lives. “Most melanoma deaths are because of stage I disease, with an estimated 3%-15% of thin melanomas (≤ 1 mm) being lethal,” wrote a trio of editorialists from Oregon Health & Science University, Portland.

When considering the high mutation rate associated with melanoma and the current limits of treatment options, early diagnosis becomes “particularly important and counterbalances the risk of overdiagnosis,” the editorialists asserted.

Primary care screening study

The new findings come from an observational study of a quality improvement initiative conducted at the University of Pittsburgh Medical Center system between 2014 and 2018, in which primary care clinicians were offered training in melanoma identification through skin examination and were encouraged to offer annual skin cancer screening to patients aged 35 years and older.

Of 595,799 eligible patients, 144,851 (24.3%) were screened at least once during the study period. Those who received screening were more likely than unscreened patients to be older (median age, 59 vs. 55 years), women, and non-Hispanic White persons.

During 5 years of follow-up, the researchers found that patients who received screening were significantly more likely than unscreened patients to be diagnosed with in situ melanoma (incidence, 30.4 vs. 14.4; hazard ratio, 2.6; P < .001) or thin invasive melanoma (incidence, 24.5 vs. 16.1; HR, 1.8; P < .001), after adjusting for factors that included age, sex, and race.

The screened patients were also more likely than unscreened patients to be diagnosed with in situ interval melanomas, defined as melanomas occurring at least 60 days after initial screening (incidence, 26.7 vs. 12.9; HR, 2.1; P < .001), as well as thin invasive interval melanomas (incidence, 18.5 vs. 14.4; HR, 1.3; P = .03).

The 60-day interval was included to account for the possible time to referral to a specialist for definitive diagnosis, the authors explained.

The incidence of the detection of melanomas thicker than 4 mm was lower in screened versus unscreened patients, but the difference was not statistically significant for all melanomas (2.7 vs. 3.3; HR, 0.8; P = .38) or interval melanomas (1.5 vs. 2.7; HR, 0.6; P = .15).
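To see how those incidence figures relate to the adjusted hazard ratios, here is a minimal Python sketch of the crude (unadjusted) rate ratios, assuming both groups’ incidences are expressed on the same person-time scale (the scale itself is not restated in the article):

```python
# Crude rate ratios from the reported incidences (screened vs. unscreened),
# assuming a common person-time scale. These ratios are unadjusted, so
# they differ from the adjusted HRs quoted in the text.
incidences = {
    "in situ melanoma": (30.4, 14.4),
    "thin invasive melanoma": (24.5, 16.1),
    "melanoma thicker than 4 mm": (2.7, 3.3),
}

for label, (screened, unscreened) in incidences.items():
    print(f"{label}: crude rate ratio = {screened / unscreened:.2f}")
# in situ ~2.11, thin invasive ~1.52, thicker than 4 mm ~0.82
```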
 

 

 

Experts weigh in

Although the follow-up period was 5 years, not all patients were followed that long after undergoing screening; for some patients, for instance, follow-up lasted only 1 year after they had been screened.

The study’s senior author, Laura K. Ferris, MD, PhD, of the department of dermatology, University of Pittsburgh, noted that a longer follow-up could shift the results.

“When you look at the curves in our figures, you do start to see them separate more and more over time for the thicker melanomas,” Dr. Ferris said in an interview. “I do suspect that, if we followed patients longer, we might start to see a more significant difference.”

The findings nevertheless add to evidence that although routine screening substantially increases the detection of melanomas overall, these melanomas are often not the ones doctors are most worried about or that increase a person’s risk of mortality, Dr. Ferris noted.

When it comes to melanoma screening, balancing the risks and benefits is key. One major downside, Dr. Ferris said, is the burden such screening could place on the health care system, with potentially unproductive screenings delaying care for patients with more urgent needs.

“We are undersupplied in the dermatology workforce, and there is often a long wait to see dermatologists, so we really want to make sure, as trained professionals, that patients have access to us,” she said. “If we’re doing something that doesn’t have proven benefit and is increasing the wait time, that will come at the expense of other patients’ access.”

Costs involved in skin biopsies and excisions of borderline lesions as well as the potential to increase patients’ anxiety represent other important considerations, Dr. Ferris noted.

However, Sancy A. Leachman, MD, PhD, a coauthor of the editorial in favor of screening, said in an interview that “at the individual level, there are an almost infinite number of individual circumstances that could lead a person to decide that the potential benefits outweigh the harms.”

According to Dr. Leachman, who is chair of the department of dermatology, Oregon Health & Science University, these individual priorities may not align with those of the various decision-makers or with guidelines, such as those from the U.S. Preventive Services Task Force, which gives visual skin cancer screening of asymptomatic patients an “I” rating, indicating “insufficient evidence.”

“Many federal agencies and payer groups focus on minimizing costs and optimizing outcomes,” Dr. Leachman and coauthors wrote. As the only professional advocates for individual patients, physicians “have a responsibility to assure that the best interests of patients are served.”

The study was funded by the University of Pittsburgh Melanoma and Skin Cancer Program. Dr. Ferris and Dr. Swerlick disclosed no relevant financial relationships. Dr. Leachman is the principal investigator for War on Melanoma, an early-detection program in Oregon.

A version of this article first appeared on Medscape.com.


FROM JAMA DERMATOLOGY


‘Forever chemicals’ exposures may compound diabetes risk


Women in midlife exposed to combinations of perfluoroalkyl and polyfluoroalkyl substances (PFASs), dubbed “forever and everywhere chemicals,” are at increased risk of developing diabetes, with a magnitude of risk similar to that of overweight and even greater than that of smoking, new research shows.

“This is the first study to examine the joint effect of PFAS on incident diabetes,” first author Sung Kyun Park, ScD, MPH, told this news organization.

“We showed that multiple PFAS as mixtures have larger effects than individual PFAS,” said Dr. Park, of the department of epidemiology, School of Public Health, University of Michigan, Ann Arbor.

The results suggest that, “given that 1.5 million Americans are newly diagnosed with diabetes each year in the USA, approximately 370,000 new cases of diabetes annually in the U.S. are attributable to PFAS exposure,” Dr. Park and authors note in the study, published in Diabetologia.
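Here is a rough sketch of the arithmetic implied by those two quoted figures; note that this simple division is not the authors’ actual method, which would derive a population attributable fraction from exposure prevalence and hazard ratios:

```python
# Attributable fraction implied by the figures quoted in the article.
new_us_cases_per_year = 1_500_000
pfas_attributable_cases = 370_000

implied_fraction = pfas_attributable_cases / new_us_cases_per_year
print(f"Implied attributable fraction: {implied_fraction:.1%}")  # ~24.7%
```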

However, Kevin McConway, PhD, emeritus professor of applied statistics, The Open University, U.K., told the UK Science Media Centre: “[Some] doubt about cause still remains. Yes, this study does show that PFAS may increase diabetes risk in middle-aged women, but it certainly can’t rule out other explanations for its findings.”
 

Is there any way to reduce exposure?

PFASs, known to be ubiquitous in the environment and also often dubbed “endocrine-disrupting” chemicals, have structures similar to fatty acids. They have been detected in the blood of most people and linked to health concerns including pre-eclampsia, altered levels of liver enzymes, inflammation, and altered lipid and glucose metabolism.

Sources of PFAS exposure can run the gamut from nonstick cookware, food wrappers, and waterproof fabrics to cosmetics and even drinking water.

The authors note a recent Consumer Reports investigation of 118 food packaging products, for instance, which reported finding PFAS chemicals in the packaging of every fast-food chain and retailer examined, including Burger King, McDonald’s, and even more health-focused chains, such as Trader Joe’s.

While efforts to pressure industry to limit PFAS in products are ongoing, Dr. Park asserted that “PFAS exposure reduction at the individual-level is very limited, so a more important way is to change policies and to limit PFAS in the air, drinking water, and foods, etc.”

“It is impossible to completely avoid exposure to PFAS, but I think it is important to acknowledge such sources and change our mindset,” he said.

In terms of clinical practice, the authors add that “it is also important for clinicians to be aware of PFAS as unrecognized risk factors for diabetes and to be prepared to counsel patients in terms of sources of exposure and potential health effects.”
 

Prospective findings from the SWAN-MPS study

The findings come from a prospective study of 1,237 women, with a median age of 49.4 years, who were diabetes-free upon entering the Study of Women’s Health Across the Nation – Multi-Pollutant Study (SWAN-MPS) between 1999 and 2000 and followed until 2017.

Blood samples taken throughout the study were analyzed for serum concentrations of seven PFASs.

Over the study period, there were 102 cases of incident diabetes, representing a rate of 6 cases per 1,000 person-years. Type of diabetes was not determined, but given the age of study participants, most were assumed to have type 2 diabetes, Dr. Park and colleagues note.
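For context, a minimal sketch of the total follow-up implied by those two numbers, assuming the 6-per-1,000-person-years figure is a simple crude rate (the article does not report total person-years directly):

```python
# Total follow-up implied by 102 incident cases at 6 per 1,000 person-years.
incident_cases = 102
rate_per_1000_person_years = 6

implied_person_years = incident_cases / rate_per_1000_person_years * 1_000
print(f"Implied follow-up: ~{implied_person_years:,.0f} person-years")  # ~17,000
```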


After adjustment for key confounders including race/ethnicity, smoking status, alcohol consumption, total energy intake, physical activity, menopausal status, and body mass index (BMI), those in the highest tertile of exposure to a combination of all seven of the PFASs were significantly more likely to develop diabetes, compared with those in the lowest tertile for exposure (hazard ratio, 2.62).

This risk was greater than that seen with individual PFASs (HR, 1.36-1.85), suggesting a potential additive or synergistic effect of multiple PFASs on diabetes risk.

The risk associated with combined PFAS exposure in the highest versus lowest tertile was similar to the risk of developing diabetes among those with overweight (BMI 25 to < 30 kg/m2) versus normal weight (HR, 2.89) and higher than the risk among current versus never smokers (HR, 2.30).

“Our findings suggest that PFAS may be an important risk factor for diabetes that has a substantial public health impact,” the authors say.

“Given the widespread exposure to PFAS in the general population, the expected benefit of reducing exposure to these ubiquitous chemicals might be considerable,” they emphasize.

The authors have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM DIABETOLOGIA


CO2 laser excision therapy for hidradenitis suppurativa shows no keloid risk


The use of carbon dioxide (CO2) laser excision therapy for hidradenitis suppurativa (HS) is not associated with an increased risk for the development of keloids, new research shows.

“With keloids disproportionately affecting Black and other skin of color patients, denying treatment on a notion that lacks evidentiary support further potentiates the health disparities experienced by these marginalized groups,” the researchers reported at the Annual Meeting of the Skin of Color Society Scientific Symposium (SOCS) 2022. In their retrospective study of 129 patients with HS treated with CO2 laser, “there were no cases of keloid formation,” they say.

HS, a potentially debilitating chronic inflammatory condition that involves painful nodules, boils, and abscesses, is often refractory to standard treatment. CO2 laser excision therapy has yielded favorable outcomes in some studies.

Although CO2 laser therapy is also used to treat keloids themselves, some clinicians hesitate to offer it to patients with HS out of concern that treating HS with the laser could trigger the development of keloids.

“Many patients come in telling us they were denied [CO2 laser] surgery due to keloids,” senior author Iltefat Hamzavi, MD, a senior staff physician in the Department of Dermatology at the Henry Ford Health System, Detroit, told this news organization.

Although patients with HS are commonly treated with CO2 laser excision in his department, this treatment approach “is underused nationally,” he said.

“Of note, the sinus tunnels of hidradenitis suppurativa can look like keloids, so this might drive surgeons away from treating [those] lesions,” Dr. Hamzavi said.

To further evaluate the risk of developing keloids with the treatment, Dr. Hamzavi and his colleagues conducted a retrospective review of 129 patients with HS treated at Henry Ford who had undergone follicular destruction with CO2 laser between 2014 and 2021; 102 (79%) patients were female. The mean age was about 38 years (range, 15-78 years).

Of the patients, almost half were Black, almost 40% were White, 5% were Asian, and 3% were of unknown ethnicity.

Medical records of nine patients included diagnoses of keloids or hypertrophic scars. On further review, none of these was actually a keloid; the documented diagnoses were hypertrophic scars, hypertrophic granulation tissue, an HS nodule, or contracture scar, the authors report.

“While the emergence of hypertrophic scars, hypertrophic granulation tissue, and scar contracture following CO2 laser excision therapy for hidradenitis suppurativa has been documented in the literature, existing evidence does not support postoperative keloid formation,” the authors conclude.

Because healing time with CO2 laser treatment is prolonged and the risk of adverse events is increased, Dr. Hamzavi underscored that “safety protocols for CO2 lasers should be followed, and wound prep instructions should be provided along with counseling on healing times.”



Regarding patient selection, he noted that “the disease should be medically stable with reduction in drainage to help control postop bleeding risk.”

The findings of the study are supported by a recent systematic review comparing the outcomes and adverse effects of ablative laser therapies, such as CO2 laser, with those of nonablative lasers for skin resurfacing. The review included 34 studies involving 1,093 patients. The conditions treated ranged from photodamage and acne scars to HS and post-traumatic scarring from basal cell carcinoma excision.

That review found that overall, rates of adverse events were higher with nonablative therapies (12.2%, 31 events), compared with ablative laser therapy, such as with CO2 laser (8.28%, 81 events). In addition, when transient events were excluded, ablative lasers were associated with fewer complications overall, compared with nonablative lasers (2.56% vs. 7.48%).
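
To relate those event counts to the percentages, the denominators can be back-calculated — a sketch only, since the review's actual denominators are not given in this summary; the implied totals exceed the 1,093 patients, which would be consistent with rates computed per treatment rather than per patient:

```python
# Back-calculating implied denominators from event counts and rates.
# These denominators are inferred, not reported; they exceed 1,093
# patients, suggesting rates may be per treatment, not per patient.

arms = [("nonablative", 31, 0.122), ("ablative (e.g., CO2)", 81, 0.0828)]
for name, events, rate in arms:
    print(f"{name}: {events} events / {rate:.2%} rate "
          f"=> ~{events / rate:.0f} treatments implied")
# nonablative: ~254 implied; ablative: ~978 implied
```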

The authors conclude: “It is our hope that this study will facilitate continued research in this domain in an effort to combat these inequities and improve access to CO2 excision or standardized excisional therapy for hidradenitis suppurativa treatment.”

Dr. Hamzavi and the other authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

AT SOCS 2022

Study finds discrepancies in biopsy decisions, diagnoses based on skin type

Article Type
Changed
Thu, 12/15/2022 - 14:33

Among dermatology residents and attending dermatologists, rates of diagnostic accuracy and appropriate biopsy recommendations were significantly lower for patients with skin of color, compared with White patients, new research shows.

“Our findings suggest diagnostic biases based on skin color exist in dermatology practice,” lead author Loren Krueger, MD, assistant professor in the department of dermatology, Emory University School of Medicine, Atlanta, said at the Annual Skin of Color Society Scientific Symposium. “A lower likelihood of biopsy of malignancy in darker skin types could contribute to disparities in cutaneous malignancies,” she added.

Disparities in dermatologic care among Black patients, compared with White patients, have been well documented. Recent evidence includes a 2020 study that showed significant shortcomings among medical students in correctly diagnosing squamous cell carcinoma, urticaria, and atopic dermatitis for patients with skin of color.

“It’s no secret that our images do not accurately or in the right quantity include skin of color,” Dr. Krueger said. “Yet few papers talk about how these biases actually impact our care. Importantly, this study demonstrates that diagnostic bias develops as early as the medical student level.”

To further investigate the role of skin color in the assessment of neoplastic and inflammatory skin conditions and decisions to perform biopsy, Dr. Krueger and her colleagues surveyed 144 dermatology residents and attending dermatologists to evaluate their clinical decision-making in assessing skin conditions for patients with lighter skin and those with darker skin. Almost 80% (113) provided complete responses and were included in the study.

For the survey, participants were shown photos of 10 neoplastic and 10 inflammatory skin conditions. Each condition was shown in matched images of patients with lighter skin (types I-II) and darker skin (types IV-VI), presented in random order. Participants were asked to identify the suspected underlying etiology (neoplastic–benign, neoplastic–malignant, papulosquamous, lichenoid, infectious, bullous, or no suspected etiology) and whether they would choose to perform biopsy for the pictured condition.

Overall, their responses showed a slightly higher probability of recommending a biopsy for patients with skin types IV-V (odds ratio, 1.18; P = .054).

However, respondents were more than twice as likely to recommend a biopsy for benign neoplasms for patients with skin of color, compared with those with lighter skin types (OR, 2.57; P < .0001). They were significantly less likely to recommend a biopsy for a malignant neoplasm for patients with skin of color (OR, 0.42; P < .0001).

In addition, the correct etiology was much more commonly missed in diagnosing patients with skin of color, even after adjusting for years in dermatology practice (OR, 0.569; P < .0001).

Conversely, for White patients, respondents were significantly less likely to recommend biopsy of benign neoplasms, more likely to recommend biopsy of malignant neoplasms, and more often identified the correct etiology.
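
For context on how such odds ratios arise, the sketch below computes one from a 2x2 table of survey responses. The counts are hypothetical, chosen only to land near a reported value; the study publishes the ORs themselves, not the underlying tables.

```python
# How an odds ratio such as 2.57 is derived from a 2x2 table of
# responses. The counts below are hypothetical illustrations.

def odds_ratio(a, b, c, d):
    """OR = (a * d) / (b * c).

    a: biopsy recommended, darker skin    b: not recommended, darker skin
    c: biopsy recommended, lighter skin   d: not recommended, lighter skin
    """
    return (a * d) / (b * c)

# Hypothetical counts for images of a benign neoplasm:
print(round(odds_ratio(72, 41, 46, 67), 2))  # 2.56, near the reported 2.57
```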



The findings underscore that “for skin of color patients, you’re more likely to have a benign neoplasm biopsied, you’re less likely to have a malignant neoplasm biopsied, and more often, your etiology may be missed,” Dr. Krueger said at the meeting.

Of note, while 45% of respondents were dermatology residents or fellows, 20.4% had 1-5 years of experience, and about 28% had 10 to more than 25 years of experience.

And while 75% of the dermatology residents, fellows, and attendings were White, there was no difference in the probability of correctly identifying the underlying etiology in dark or light skin types based on the provider’s self-identified race.

Importantly, the patterns in the study of diagnostic discrepancies are reflected in broader dermatologic outcomes. The 5-year melanoma survival rate is 74.1% among Black patients and 92.9% among White patients. Dr. Krueger referred to data showing that only 52.6% of Black patients have stage I melanoma at diagnosis, whereas among White patients, the rate is much higher, at 75.9%.

“We know skin malignancy can be more aggressive and late-stage in skin of color populations, leading to increased morbidity and later stage at initial diagnosis,” Dr. Krueger told this news organization. “We routinely attribute this to limited access to care and lack of awareness on skin malignancy. However, we have no evidence on how we, as dermatologists, may be playing a role.”

Furthermore, the decision to perform biopsy or not can affect the size and stage at diagnosis of a cutaneous malignancy, she noted.

Key changes needed to prevent the disparities – and their implications – should start at the training level, she emphasized. “I would love to see increased photo representation in training materials – this is a great place to start,” Dr. Krueger said.

In addition, “encouraging medical students, residents, and dermatologists to learn from skin of color experts is vital,” she said. “We should also provide hands-on experience and training with diverse patient populations.”

The first step to addressing biases “is to acknowledge they exist,” Dr. Krueger added. “I am hopeful this inspires others to continue to investigate these biases, as well as how we can eliminate them.”

The study was funded by the Rudin Resident Research Award. The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Live-donor liver transplants for patients with CRC liver mets

Article Type
Changed
Wed, 04/13/2022 - 08:19

Encouraging improvements in survival have been reported by surgeons who used liver transplants from live donors as a treatment for patients with colorectal cancer (CRC) and unresectable liver metastases. These patients usually have a poor prognosis, and for many, palliative chemotherapy is the standard of care.

“For the first time, we have been able to demonstrate [outside of Norway] that liver transplantation for patients with unresectable liver metastases is feasible with good outcomes,” lead author Gonzalo Sapisochin, MD, PhD, an assistant professor of surgery at the University of Toronto, said in an interview.

“Furthermore, this is the first time we are able to prove that living donation may be a good strategy in this setting,” Dr. Sapisochin said of the series of 10 cases that they published in JAMA Surgery.

The series showed “excellent perioperative outcomes for both donors and recipients,” noted the authors of an accompanying commentary. They said the team “should be commended for adding living-donor liver transplantation to the armamentarium of surgical options for patients with CRC liver metastases.”

However, they expressed concern about the relatively short follow-up of 1.5 years and the “very high” recurrence rate of 30%.

Commenting in an interview, lead editorialist Shimul Shah, MD, an associate professor of surgery and the chief of solid organ transplantation at the University of Cincinnati, said: “I agree that overall survival is an important measure to look at, but it’s hard to look at overall survival with [1.5] years of follow-up.”

Other key areas of concern are the need for more standardized practices and for more data on how liver transplantation compares with patients who just continue to receive chemotherapy.

“I certainly think that there’s a role for liver transplantation in these patients, and I am a big fan of this,” Dr. Shah emphasized, noting that four patients at his own center have recently received liver transplants, including three from deceased donors.

“However, I just think that as a community, we need to be cautious and not get too excited too early,” he said. “We need to keep studying it and take it one step at a time.”

Moving from deceased to living donors

Nearly 70% of patients with CRC develop liver metastases, and when these are unresectable, the prognosis is poor, with 5-year survival rates of less than 10%.

The option of liver transplantation was first reported in 2015 by a group in Norway. Their study included 21 patients with CRC and unresectable liver tumors. They reported a striking improvement in overall survival at 5 years (56% vs. 9% among patients who started first-line chemotherapy).

But with shortages of donor livers, this approach has not caught on. Deceased-donor liver allografts are in short supply in most countries, and recent allocation changes have further shifted available organs away from patients with liver tumors.

An alternative is to use living donors. In a recent study, Dr. Sapisochin and colleagues showed viability and a survival advantage, compared with deceased-donor liver transplantation.

Building on that work, they established treatment protocols at three centers – the University of Rochester (N.Y.) Medical Center, the Cleveland Clinic, and the University Health Network in Toronto.

Of 91 evaluated patients who were prospectively enrolled with liver-confined, unresectable CRC liver metastases, 10 met all inclusion criteria and received living-donor liver transplants between December 2017 and May 2021. The median age of the patients was 45 years; six were men, and four were women.

These patients all had primary tumors greater than stage T2 (six T3 and four T4b). Lymphovascular invasion was present in two patients, and perineural invasion was present in one patient.

The median time from diagnosis of the liver metastases to liver transplant was 1.7 years (range, 1.1-7.8 years).

At a median follow-up of 1.5 years (range, 0.4-2.9 years), recurrence occurred in three patients; Kaplan-Meier estimates of recurrence-free survival and overall survival were 62% and 100%, respectively.
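
To illustrate how a Kaplan-Meier estimate like that 62% figure is computed in a small cohort, here is a minimal sketch. The per-patient follow-up times and censoring pattern are hypothetical; only the cohort size (10 recipients), the number of recurrences (3), and the follow-up range mirror the published summary.

```python
# Minimal Kaplan-Meier estimator for recurrence-free survival (RFS).
# The per-patient times below are hypothetical illustrations.

def kaplan_meier(times, events):
    """Return (time, survival) steps. events: 1 = recurrence, 0 = censored."""
    at_risk = len(times)
    surv = 1.0
    steps = []
    for t, e in sorted(zip(times, events)):
        if e == 1:  # the survival curve drops only at recurrence times
            surv *= (at_risk - 1) / at_risk
            steps.append((t, surv))
        at_risk -= 1  # both recurrences and censorings leave the risk set
    return steps

times  = [0.4, 0.6, 0.9, 1.1, 1.3, 1.5, 1.9, 2.3, 2.6, 2.9]  # years
events = [0,   1,   1,   0,   0,   1,   0,   0,   0,   0]

for t, s in kaplan_meier(times, events):
    print(f"t = {t:.1f} y: RFS = {s:.0%}")
# t = 0.6 y: RFS = 89%; t = 0.9 y: RFS = 78%; t = 1.5 y: RFS = 62%
```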

Rates of morbidity associated with transplantation were no higher than those observed in established standards for the donors or recipients, the authors noted.

Among transplant recipients, three patients had no Clavien-Dindo complications; three had grade II, and four had grade III complications. Among donors, five had no complications, four had grade I, and one had grade III complications.

All 10 donors were discharged from the hospital 4-7 days after surgery and recovered fully.

All three patients who experienced recurrences were treated with palliative chemotherapy. One died of disease after 3 months of treatment. As of the time of publication of the study, the other two had survived for 2 or more years after their living-donor liver transplant.
 

Patient selection key

The authors are now investigating tumor subtypes, responses in CRC liver metastases, and other factors, with the aim of developing a novel screening method to identify appropriate candidates more quickly.

In the meantime, they emphasized that indicators of disease biology, such as the Oslo Score, the Clinical Risk Score, and sustained clinical response to systemic therapy, “remain the key filters through which to select patients who have sufficient opportunity for long-term cancer control, which is necessary to justify the risk to a living donor.”

Dr. Sapisochin reported receiving grants from Roche and Bayer and personal fees from Integra, Roche, AstraZeneca, and Novartis outside the submitted work. Dr. Shah disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM JAMA SURGERY

Novel medication tied to better quality of life in major depression

Article Type
Changed
Thu, 04/14/2022 - 16:05

An investigational once-daily oral neuroactive steroid is linked to significant improvement in quality of life (QoL) and well-being in patients with major depressive disorder (MDD), new research shows.

In a phase 3 trial that included more than 500 adult patients with MDD, those who received zuranolone for 14 days showed greater improvement at day 15 across numerous QoL outcomes, compared with their counterparts in the placebo group.

In addition, combined analysis of four zuranolone clinical trials showed “mental well-being and functioning improved to near general population norm levels” for the active-treatment group, reported the researchers, led by Anita H. Clayton, MD, chair and professor of psychiatry, University of Virginia, Charlottesville.

“Based on these integrated analyses, the benefit of treatment with zuranolone may extend beyond reduction in depressive symptoms to include potential improvement in quality of life and overall health, as perceived by patients,” they add.

The findings were presented as part of the Anxiety and Depression Association of America Anxiety & Depression conference.
 

First oral formulation

Zuranolone represents the second entry in the new class of neuroactive steroid drugs, which modulate GABA-A receptor activity – but it would be the first to have an oral formulation. Brexanolone, which was approved by the Food and Drug Administration in 2019 for postpartum depression, is administered through continuous IV infusion over 60 hours.

As previously reported by this news organization, zuranolone improved depressive symptoms as early as day 3, achieving the primary endpoint of significantly greater reduction in scores on the 17-item Hamilton Rating Scale for Depression from baseline to day 15 versus placebo (P = .014).

In the new analysis, patient-reported measures of functional health and well-being were assessed in the WATERFALL trial. It included 266 patients with MDD who were treated with zuranolone 50 mg daily for 2 weeks and 268 patients with MDD who were treated with placebo.

The study used the 36-Item Short Form Health Survey, version 2 (SF-36v2), which covers a wide range of patient-reported measures, including physical function, bodily pain, general health, vitality, social function, and “role-emotional” symptoms.

Results showed that although the treatment and placebo groups had similar baseline SF-36v2 scores, those receiving zuranolone reported significantly greater improvements at day 15 in almost all of the assessment’s domains, including physical function (treatment difference, 0.8), general health (1.0), vitality (3.1), social functioning (1.1), and role-emotional symptoms (1.5; for all comparisons, P < .05). The only exceptions were in role-physical symptoms and bodily pain.

In measures that included physical function, bodily pain, and general health, the patients achieved improvements at day 15 that were consistent with normal levels, with the improvement in vitality considered clinically meaningful versus placebo.

Integrated data

In further analysis of integrated data from four zuranolone clinical trials in the NEST and LANDSCAPE programs for patients with MDD and postpartum depression, results showed similar improvements at day 15 for zuranolone in QoL and overall health across all of the SF-36v2 functioning and well-being domains (P <.05), with the exceptions of physical measure and bodily pain.

By day 42, all of the domains showed significantly greater improvement with zuranolone versus placebo (all, P <.05).

Among the strongest score improvements in the integrated trials were measures in social functioning, which improved from baseline scores of 29.66 to 42.82 on day 15 and to 43.59 on day 42.

Emotional domain scores improved from 24.43 at baseline to 39.13 on day 15 and to 39.82 on day 42. For mental health, the integrated scores for the zuranolone group improved from 27.13 at baseline to 42.40 on day 15 and 42.62 on day 42.

Of note, the baseline scores for mental health represented just 54.3% of those in the normal population; with the increase at day 15, the level was 84.8% of the normal population.
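
Those percent-of-norm figures follow directly from the SF-36v2's norm-based scoring, which scales each domain so the general population averages 50 — an assumption here, since the scoring metric is not stated in this summary, though the quoted percentages are consistent with it:

```python
# Percent-of-norm arithmetic for the mental health domain. Assumes the
# standard SF-36v2 norm-based metric (general-population mean of 50).

POP_NORM_MEAN = 50.0  # assumed norm-based mean for SF-36v2 domains

for label, score in [("baseline", 27.13), ("day 15", 42.40)]:
    print(f"{label}: {score / POP_NORM_MEAN:.1%} of the population norm")
# baseline: 54.3% of the population norm
# day 15:   84.8% of the population norm
```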

“Across four completed placebo-controlled NEST and LANDSCAPE clinical trials, patient reports of functional health and well-being as assessed by the SF-36v2 indicated substantial impairment at baseline compared to the population norm,” the researchers reported.

The improvements are especially important in light of the fact that in some patients with MDD, functional improvement is a top priority.

“Patients have often prioritized returning to their usual level of functioning over reduction in depressive symptoms, and functional recovery has been associated with better prognosis of depression,” the investigators wrote.

Zuranolone trials have shown that treatment-emergent adverse events (AEs) occur among about 60% of patients, versus about 44% with placebo. The most common AEs are somnolence, dizziness, headache, sedation, and diarrhea, with no increases in suicidal ideation or withdrawal.

Rates of severe AEs are low, occurring in about 3% of patients versus 1.1% with placebo, the researchers noted.

Further, unlike serotonergic antidepressants such as SNRIs and SSRIs, zuranolone does not appear to carry the undesirable side effects of decreased libido and sexual dysfunction, they added.
 

Clinically meaningful?

Andrew J. Cutler, MD, clinical associate professor of psychiatry at State University of New York, Syracuse, said the data are “very significant” for a number of reasons.

“We need more options to treat depression, especially ones with novel mechanisms of action and faster onset of efficacy, such as zuranolone,” said Dr. Cutler, who was not involved in the current study. He has coauthored other studies on zuranolone.

Regarding the study’s QoL outcomes, “while improvement in depressive symptoms is very important, what really matters to patients is improvement in function and quality of life,” Dr. Cutler noted.

Also commenting on the study, Jonathan E. Alpert, MD, PhD, chair of the department of psychiatry and behavioral sciences and professor of psychiatry, neuroscience, and pediatrics at Albert Einstein College of Medicine, New York, said the investigational drug could represent an important addition to the armamentarium for treating depression.

“Zuranolone has good oral bioavailability and would represent the first neuroactive steroid antidepressant available in oral form and, indeed, the first non–monoamine-based antidepressant available in oral form,” he said in an interview.

Dr. Alpert was not involved in the research and has no relationship with the drug’s development.

He noted that although there are modest differences between the patients who received zuranolone and those who received placebo in the trials, “this may have been related to high placebo response rates, which often complicate antidepressant trials.”

“Further research is needed to determine whether differences between zuranolone and placebo are clinically meaningful, though the separation between drug and placebo on the primary endpoint, as well as some other measures, such as quality of life measures, is promising,” Dr. Alpert said.

However, he added that comparisons with other active antidepressants in terms of efficacy and tolerability remain to be seen.

“Given the large number of individuals with major depressive disorder who have incomplete response to or do not tolerate monoaminergic antidepressants, the development of agents that leverage novel nonmonoaminergic mechanisms is important,” Dr. Alpert concluded.

The study was funded by Sage Therapeutics and Biogen. Dr. Cutler has been involved in research of zuranolone for Sage Therapeutics. Dr. Alpert has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

An investigational once-daily oral neuroactive steroid is linked to significant improvement in quality of life (QoL) and well-being in patients with major depressive disorder (MDD), new research shows.

In a phase 3 trial that included more than 500 adult patients with MDD, those who received zuranolone for 14 days showed greater improvement at day 15 across numerous QoL outcomes, compared with their counterparts in the placebo group.

In addition, combined analysis of four zuranolone clinical trials showed “mental well-being and functioning improved to near general population norm levels” for the active-treatment group, reported the researchers, led by Anita H. Clayton, MD, chair and professor of psychiatry, University of Virginia, Charlottesville.

“Based on these integrated analyses, the benefit of treatment with zuranolone may extend beyond reduction in depressive symptoms to include potential improvement in quality of life and overall health, as perceived by patients,” they add.

The findings were presented at the Anxiety and Depression Association of America (ADAA) Anxiety & Depression 2022 conference.

First oral formulation

Zuranolone represents the second entry in the new class of neuroactive steroid drugs, which modulate GABA-A receptor activity – but it would be the first to have an oral formulation. Brexanolone, which was approved by the Food and Drug Administration in 2019 for postpartum depression, is administered through continuous IV infusion over 60 hours.

As previously reported by this news organization, zuranolone improved depressive symptoms as early as day 3, achieving the primary endpoint of significantly greater reduction in scores on the 17-item Hamilton Rating Scale for Depression from baseline to day 15 versus placebo (P = .014).

In the new analysis, patient-reported measures of functional health and well-being were assessed in the WATERFALL trial, which included 266 patients with MDD treated with zuranolone 50 mg daily for 2 weeks and 268 treated with placebo.

The study used the Short Form–36 Health Survey, version 2 (SF-36v2), which covers a wide range of patient-reported domains, including physical functioning, bodily pain, general health, vitality, social functioning, and role limitations due to emotional problems (role-emotional).

Results showed that although the treatment and placebo groups had similar baseline SF-36v2 scores, those receiving zuranolone reported significantly greater improvements at day 15 in almost all of the assessment’s domains, including physical functioning (treatment difference, 0.8), general health (1.0), vitality (3.1), social functioning (1.1), and role-emotional (1.5; for all comparisons, P < .05). The only exceptions were role-physical and bodily pain.

For physical functioning, bodily pain, and general health, zuranolone-treated patients reached day-15 scores consistent with general-population norms, and the improvement in vitality was considered clinically meaningful versus placebo.

Integrated data

In a further analysis of integrated data from four zuranolone clinical trials in the NEST and LANDSCAPE programs, which enrolled patients with MDD and postpartum depression, results showed similar day-15 improvements with zuranolone in QoL and overall health across the SF-36v2 functioning and well-being domains (P < .05), with the exceptions of physical functioning and bodily pain.

By day 42, all of the domains showed significantly greater improvement with zuranolone versus placebo (all P < .05).

Among the strongest score improvements in the integrated trials was social functioning, which improved from a baseline score of 29.66 to 42.82 on day 15 and 43.59 on day 42.

Emotional domain scores improved from 24.43 at baseline to 39.13 on day 15 and to 39.82 on day 42. For mental health, the integrated scores for the zuranolone group improved from 27.13 at baseline to 42.40 on day 15 and 42.62 on day 42.

Of note, the baseline mental health scores represented just 54.3% of the general-population norm; with the increase at day 15, that level reached 84.8%.
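
These figures are consistent with SF-36v2 norm-based scoring, in which domain scores are scaled to a general-population mean of 50; on that assumption, the group’s mental health score works out to 27.13/50, or about 54.3%, of the norm at baseline and 42.40/50, or about 84.8%, at day 15.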

“Across four completed placebo-controlled NEST and LANDSCAPE clinical trials, patient reports of functional health and well-being as assessed by the SF-36v2 indicated substantial impairment at baseline compared to the population norm,” the researchers reported.

The improvements are especially important given that, for some patients with MDD, functional improvement is a top priority.

“Patients have often prioritized returning to their usual level of functioning over reduction in depressive symptoms, and functional recovery has been associated with better prognosis of depression,” the investigators wrote.

Zuranolone trials have shown that treatment-emergent adverse events (AEs) occur in about 60% of patients, versus about 44% with placebo. The most common AEs are somnolence, dizziness, headache, sedation, and diarrhea, with no increase in suicidal ideation or withdrawal.

Rates of severe AEs are low, observed in about 3% of patients versus 1.1% with placebo, the researchers noted.

Further, unlike serotonergic antidepressants such as SSRIs and SNRIs, zuranolone does not appear to carry the undesirable side effects of decreased libido and sexual dysfunction, they added.

Clinically meaningful?

Andrew J. Cutler, MD, clinical associate professor of psychiatry at SUNY Upstate Medical University, Syracuse, N.Y., said the data are “very significant” for a number of reasons.

“We need more options to treat depression, especially ones with novel mechanisms of action and faster onset of efficacy, such as zuranolone,” said Dr. Cutler, who was not involved in the current study. He has coauthored other studies on zuranolone.

Regarding the study’s QoL outcomes, “while improvement in depressive symptoms is very important, what really matters to patients is improvement in function and quality of life,” Dr. Cutler noted.

Also commenting on the study, Jonathan E. Alpert, MD, PhD, chair of the department of psychiatry and behavioral sciences and professor of psychiatry, neuroscience, and pediatrics at Albert Einstein College of Medicine, New York, said the investigational drug could represent an important addition to the armamentarium for treating depression.

“Zuranolone has good oral bioavailability and would represent the first neuroactive steroid antidepressant available in oral form and, indeed, the first non–monoamine-based antidepressant available in oral form,” he said in an interview.

Dr. Alpert was not involved in the research and has no relationship with the drug’s development.

He noted that although the differences between the zuranolone and placebo groups in the trials were modest, “this may have been related to high placebo response rates, which often complicate antidepressant trials.

“Further research is needed to determine whether differences between zuranolone and placebo are clinically meaningful, though the separation between drug and placebo on the primary endpoint, as well as some other measures, such as quality of life measures, is promising,” Dr. Alpert said.

However, he added, how zuranolone compares with other active antidepressants in terms of efficacy and tolerability remains to be seen.

“Given the large number of individuals with major depressive disorder who have incomplete response to or do not tolerate monoaminergic antidepressants, the development of agents that leverage novel nonmonoaminergic mechanisms is important,” Dr. Alpert concluded.

The study was funded by Sage Therapeutics and Biogen. Dr. Cutler has been involved in research of zuranolone for Sage Therapeutics. Dr. Alpert has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.
