Cryoballoon ablation demonstrates long-term durability in BE

Cryoballoons offer tempting alternative
Article Type
Changed
Tue, 03/15/2022 - 21:21

 

Similar to radiofrequency ablation, cryoballoon ablation (CBA) is a durable approach that can eradicate Barrett’s esophagus (BE) in treatment-naive patients with dysplastic BE, according to a single-center cohort study.

Endoscopic mucosal resection (EMR), radiofrequency ablation (RFA), and cryotherapy are established techniques used in endoscopic eradication therapy (EET) of BE, wrote study authors led by Mohamad Dbouk, MD, of Johns Hopkins Medical Institutions in Baltimore, in Techniques and Innovations in Gastrointestinal Endoscopy. Unlike RFA, which uses heat to induce tissue necrosis and reepithelialization with normal tissue, cryotherapy applies extreme cold to treat BE. While cryotherapy has been studied over the past decade as an alternative ablative modality, Dr. Dbouk and colleagues noted that long-term data on its durability and outcomes are lacking.

To gauge the durability of CBA for dysplastic BE, the researchers examined outcomes of 59 consecutive patients with BE and confirmed low-grade dysplasia (n = 22), high-grade dysplasia (n = 33), or intramucosal cancer (n = 4), all of whom were treated with CBA for BE eradication. The single-center cohort comprised only treatment-naive patients, who had a mean age of 66.8 years (91.5% male). In the overall cohort, the mean BE length was 5 cm, although 23.7% of patients had BE ≥8 cm in length.

Following confirmation of dysplastic BE in biopsies and/or EMR specimens at baseline, patients underwent CBA applied to the gastric cardia as well as to all visible BE, using the cryoballoon focal ablation system with a focal or pear-shaped cryoballoon. The investigators performed surveillance esophagogastroduodenoscopy (EGD) to assess the response to CBA. Patients with high-grade dysplasia (HGD) underwent EGD and biopsy every 3 months for the first year after completing CBA, every 6 months for the second year, and once per year thereafter. Those with baseline biopsies showing low-grade dysplasia (LGD) underwent EGD and biopsy every 6 months during the first year after CBA and annually thereafter. Retreatment with ablation was allowed if recurrent dysplasia or intestinal metaplasia was found.

The study’s primary endpoints included short-term efficacy – defined as the rate of complete eradication of dysplasia (CE-D) and intestinal metaplasia (CE-IM) at 1-year follow-up – and durability – characterized by the proportion of patients with CE-D and CE-IM within 18 months and maintained at 2- and 3-year follow-up.

The median follow-up period for the patient cohort was 54.3 months. Approximately 95% of the 56 patients evaluable at 1 year achieved CE-D, while 75% achieved CE-IM. In an analysis stratified by baseline dysplasia grade, the CE-D rate was 96% in both the LGD and HGD groups. At 1 year, the median number of CBA sessions needed to achieve CE-IM was 3.

Throughout treatment and the follow-up period, none of the patients progressed beyond their baseline dysplasia grade or developed esophageal cancer. All patients maintained CE-D at years 2, 3, and 4. In addition, rates of CE-IM were 98% at 2 years, 98% at 3 years, and 97% at 4 years. After stratification by baseline grade of dysplasia, the researchers found no significant difference between groups in the rates of CE-D and CE-IM at each follow-up year.

Of the 48 patients who initially achieved CE-IM, 14.6% developed recurrent intestinal metaplasia (IM) after a median of 20.7 months, including six recurrences in the esophagus and one at the gastroesophageal junction. Approximately 57% of patients who developed recurrent IM had LGD at baseline, while 43% had HGD. The length of BE was not significantly associated with the risk of IM recurrence in a Cox proportional hazards analysis (hazard ratio, 1.02; 95% confidence interval, 0.86-1.2; P = .8).

Approximately 8.5% of patients had post-CBA strictures that required dilation during the study period. In a bivariate analysis, individuals with a BE length of ≥8 cm were significantly more likely to develop strictures than patients without ultra-long BE (28.6% vs. 2.2%; P = .009). Strictures occurred within the first 4 months after the initial CBA, and the median interval from the first CBA treatment to stricture detection on follow-up EGD was 2 months. Around 1.7% of patients experienced postprocedural bleeding that required clipping for closure; the affected patients were taking clopidogrel for atrial fibrillation during the first year of active treatment.

Limitations of the study included the small sample size as well as the inclusion of patients from a single center, which the researchers suggest may limit the generalizability of the results.

“More research is needed to confirm the long-term durability of CBA,” the authors concluded. “Randomized controlled trials comparing CBA with RFA are needed to assess the role of CBA as a first-line and rescue EET.”

Several of the researchers reported conflicts of interest with industry. The study received no industry funding.
 


Barrett’s endoscopic eradication therapy, comprising resection of visible lesions and ablation of remaining Barrett’s mucosa, is the standard of care for dysplasia management. Radiofrequency ablation (RFA) eliminates dysplasia in 91% of cases and intestinal metaplasia (IM) in 78%. Recurrence of dysplasia is rare, although IM recurs in about 20% of patients.

This study by Dbouk et al. examines the success of a newer ablation modality, the cryoballoon focal ablation system (CbFAS), in ablating Barrett’s tissue. With CbFAS, mucosa is focally ablated by freezing on contact with a nitrous oxide–cooled balloon. In this single-center, single-operator study, CbFAS successfully eliminated dysplasia and IM for up to 4 years at rates comparable to RFA, with dysplasia and IM recurrence seen in 1.9% and 14.6% of patients, respectively. The stricture rate was 8.5%, higher than the 5% typically reported for RFA.

Given the impressive results of RFA, one might ask why alternative ablation therapies are needed. CbFAS equipment costs are lower than those of RFA, and postprocedural discomfort may be less. Failure of ablation is poorly understood; it is likely attributable to inadequate reflux suppression and perhaps to thicker areas of Barrett’s mucosa. The greater depth of injury with cryoablation may succeed in some cases where RFA fails. The complexity of this ablation procedure remains high, and excessive overlap of treatment sites probably explains the higher stricture rate. Where cryoballoon ablation fits in the Barrett’s ablation paradigm is not yet clear, but its lower cost and availability may give this new technology traction in the established field of Barrett’s ablation.

Bruce D. Greenwald, MD, is a professor of medicine at the University of Maryland, Baltimore, and the Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore. He is a consultant for Steris Endoscopy.

Article Source

FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY


FDA approves first PARP inhibitor for early BRCA+ breast cancer

Article Type
Changed
Fri, 12/16/2022 - 10:07

 

The PARP inhibitor olaparib (Lynparza) is now approved by the U.S. Food and Drug Administration for use in early-stage breast cancer, in addition to later-stage disease. Specifically, the new approval is for the adjuvant treatment of adult patients with high-risk, early-stage, HER2-negative, BRCA-mutated breast cancer who have completed chemotherapy and local treatment.

The FDA also approved BRACAnalysis CDx (Myriad Genetics), a companion diagnostic test to identify patients who may benefit from olaparib.

The latest approval was based on results of the phase 3 OlympiA trial, which showed a 42% improvement in invasive and distant disease-free survival with olaparib in comparison with placebo. Data from OlympiA and other clinical studies also confirm BRACAnalysis CDx as “an effective test for patients deciding on their best treatment options,” Myriad Genetics noted in a press release.

The OlympiA results, as reported by this news organization, were presented during the plenary session of the American Society of Clinical Oncology 2021 annual meeting and were published in the New England Journal of Medicine.

Those findings prompted ASCO to issue a “rapid recommendation” update of its 2020 guidelines for the management of hereditary breast cancer.

The latest results from OlympiA show that olaparib reduced the risk of death by 32% (hazard ratio, 0.68) in comparison with placebo, according to a company press release announcing the approval. Overall survival data are slated for presentation at a European Society for Medical Oncology Virtual Plenary session on March 16, 2022.

A version of this article first appeared on Medscape.com.


Registry data support lowering CRC screening age to 45

Article Type
Changed
Wed, 03/16/2022 - 12:30

Approximately one-third of people between 45 and 49 years of age who undergo colonoscopies have neoplastic colorectal pathology, according to a retrospective analysis.

According to the researchers, led by Parth Trivedi, MD, of the Icahn School of Medicine at Mount Sinai, New York, there has been a progressive, “disturbing” rise in early-onset colorectal cancer (CRC) in the United States, which has prompted guidelines from both the American Cancer Society and the U.S. Preventive Services Task Force to recommend lowering the starting age for CRC screening to 45 years for average-risk individuals. Despite these recommendations, little research to date has fully characterized the prevalence of colorectal neoplasia in individuals younger than the traditional screening starting age of 50 years.

Dr. Trivedi and colleagues, who published their findings in Gastroenterology, retrospectively reviewed colonoscopy data recorded in the Gastrointestinal Quality Improvement Consortium Registry to address the current knowledge gaps on early-onset CRC. The data covered procedures conducted at 123 AMSURG ambulatory endoscopy centers across 29 states between January 2014 and February 2021. In total, 2,921,816 colonoscopies in patients aged 18-54 years were recorded by AMSURG-associated endoscopists during the study period; of these, 562,559 met inclusion criteria for high-quality screening or diagnostic colonoscopy.

The researchers pooled a young-onset age group comprising patients aged 18-49 years, in whom 145,998 procedures were performed, including 79,934 procedures in patients aged 45-49 years. A comparator group with 336,627 procedures in patients aged 50-54 years was also included. Findings were categorized as CRC, advanced premalignant lesions (APL), or “any neoplasia,” the latter comprising all adenomas, sessile serrated polyps, and CRC.

Among patients aged 18-44 years, the most frequent indications were “diagnostic-other” (45.6%) as well as “diagnostic-bleeding” (39.4%). Among patients between 45 and 49 years of age, the most frequent indications were “screening” (41.4%) and “diagnostic-other” (30.7%). Nearly all (90%) procedures among those aged 50-54 years were for screening.

A multivariable logistic regression identified five variables predictive of either APL or CRC in patients aged 18-49 years: increasing age (odds ratio, 1.08; 95% confidence interval, 1.07-1.08; P < .01), male sex (OR, 1.67; 95% CI, 1.63-1.70; P < .01), White race (vs. African American: OR, 0.76; 95% CI, 0.73-0.79; P < .01; vs. Asian: OR, 0.89; 95% CI, 0.84-0.94; P < .01), family history of CRC (OR, 1.21; 95% CI, 1.16-1.26; P < .01) or polyps (OR, 1.33; 95% CI, 1.24-1.43; P < .01), and examinations for bleeding (OR, 1.15; 95% CI, 1.12-1.18; P < .01) or screening (OR, 1.20; 95% CI, 1.16-1.24; P < .01).

The prevalence of neoplastic findings in the young-onset group increased with age across the categories of any neoplasia, APL, and CRC. Among patients aged 40-44 years, 26.59% had any neoplasia, 5.76% had APL, and 0.53% had CRC. Among those aged 45-49 years, approximately 32% had any neoplasia, 7.5% had APL, and 0.58% had CRC. In the 50- to 54-year-old group, the prevalences of any neoplasia, APL, and CRC were 37.72%, 9.48%, and 0.32%, respectively.

Across all age groups, a family history of CRC was associated with a higher prevalence of any neoplasia and APL. In fact, the rates of any neoplasia and APL in patients with a family history of CRC were comparable to those of patients 5 years older with no family history of the disease. Across most young-onset age groups, however, individuals with a positive family history had a lower CRC prevalence than patients with no family history.

The researchers noted that their population data are derived from ambulatory endoscopy centers, which may introduce bias associated with insurance coverage or patient preference to attend specific endoscopic centers. Additionally, the investigators stated that many records on race and ethnicity were missing, further limiting the findings.

“The present analysis of neoplastic colorectal pathology among individuals younger than age 50 suggests that lowering the screening age to 45 for men and women of all races and ethnicities will likely detect important pathology rather frequently,” they concluded. In addition, the researchers noted that the study results “underscore the importance of early messaging to patients and providers in the years leading up to age 45.” Ultimately, improved “awareness of pathology prevalence in individuals younger than age 45 can help guide clinicians in the clinical management of CRC risk,” the researchers wrote.

Several of the researchers reported conflicts of interest with Exact Sciences Corp and Freenome. The study received no industry funding.


Approximately one-third of people between 45 and 49 years of age who undergo colonoscopies have neoplastic colorectal pathology, according to a retrospective analysis.

According to the researchers, led by Parth Trivedi, MD, of the Icahn School of Medicine at Mount Sinai, New York, a progressive and “disturbing” rise in early-onset colorectal cancer (CRC) in the United States has prompted organizations from the American Cancer Society to the U.S. Preventive Services Task Force to recommend lowering the CRC screening starting age to 45 years for average-risk individuals. Despite these recommendations, little research to date has fully characterized the prevalence of colorectal neoplasia in individuals younger than 50 years, the previous recommended starting age for CRC screening.

Dr. Trivedi and colleagues, who published their study findings in Gastroenterology, retrospectively reviewed colonoscopy data recorded in the Gastrointestinal Quality Improvement Consortium Registry to address the current knowledge gaps on early-onset CRC. Collected data were for procedures conducted at 123 AMSURG ambulatory endoscopy centers across 29 states between January 2014 and February 2021. In total, 2,921,816 colonoscopies during the study period among patients aged 18-54 years were recorded by AMSURG-associated endoscopists; of these, 562,559 met inclusion criteria for high-quality screening or diagnostic colonoscopy procedures.

The researchers pooled a young-onset group of patients aged 18-49 years, in whom 145,998 procedures were performed, including 79,934 procedures in patients aged 45-49 years. A comparator group of 336,627 procedures in patients aged 50-54 years was also included in the study. Findings were categorized as CRC, advanced premalignant lesions (APLs), or “any neoplasia,” the last of which included all adenomas, sessile serrated polyps, and CRC.

Among patients aged 18-44 years, the most frequent indications were “diagnostic-other” (45.6%) as well as “diagnostic-bleeding” (39.4%). Among patients between 45 and 49 years of age, the most frequent indications were “screening” (41.4%) and “diagnostic-other” (30.7%). Nearly all (90%) procedures among those aged 50-54 years were for screening.

A multivariable logistic regression identified five variables predictive of either APL or CRC in patients aged 18-49 years: increasing age (odds ratio, 1.08; 95% confidence interval, 1.07-1.08; P < .01), male sex (OR, 1.67; 95% CI, 1.63-1.70; P < .01), White race (vs. African American: OR, 0.76; 95% CI, 0.73-0.79; P < .01; vs. Asian: OR, 0.89; 95% CI, 0.84-0.94; P < .01), family history of CRC (OR, 1.21; 95% CI, 1.16-1.26; P < .01) and of polyps (OR, 1.33; 95% CI, 1.24-1.43; P < .01), and examinations for bleeding (OR, 1.15; 95% CI, 1.12-1.18; P < .01) or screening (OR, 1.20; 95% CI, 1.16-1.24; P < .01).
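The odds ratios and confidence intervals reported above follow standard logistic-regression arithmetic: the OR is the exponentiated model coefficient (log-odds), and the 95% CI comes from exponentiating the coefficient plus or minus 1.96 standard errors. A minimal sketch of that conversion; the coefficient and standard error below are hypothetical values chosen only to roughly reproduce the age OR of 1.08, not figures taken from the study:

```python
import math

def or_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds) and its
    standard error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical log-odds per additional year of age and its standard error.
odds_ratio, ci_low, ci_high = or_with_ci(0.077, 0.002)
print(f"OR = {odds_ratio:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")  # OR = 1.08
```

The same arithmetic applies to every predictor in the table; a coefficient of 0 corresponds to an OR of exactly 1 (no association).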

The prevalence of neoplastic findings in the young-onset group increased with age for all three categories: any neoplasia, APLs, and CRC. Among patients aged 40-44 years, 26.59% had any neoplasia, 5.76% had APLs, and 0.53% had CRC. Among those aged 45-49 years, approximately 32% had any neoplasia, 7.5% had APLs, and 0.58% had CRC. In the 50- to 54-year-old group, the prevalences of any neoplasia, APLs, and CRC were 37.72%, 9.48%, and 0.32%, respectively.

Across all age groups, a family history of CRC was associated with a higher prevalence of any neoplasia and of APLs. Indeed, the rates of any neoplasia and APLs in patients with a family history of CRC were comparable to those of patients 5 years older without such a history. In most young-onset age groups, however, individuals with a positive family history had a lower CRC prevalence than patients with no family history.

The researchers noted that their population data are derived from ambulatory endoscopy centers, which may introduce bias associated with insurance coverage or patient preference to attend specific endoscopic centers. Additionally, the investigators stated that many records on race and ethnicity were missing, further limiting the findings.

“The present analysis of neoplastic colorectal pathology among individuals younger than age 50 suggests that lowering the screening age to 45 for men and women of all races and ethnicities will likely detect important pathology rather frequently,” they concluded. In addition, the researchers noted that the study results “underscore the importance of early messaging to patients and providers in the years leading up to age 45.” Ultimately, improved “awareness of pathology prevalence in individuals younger than age 45 can help guide clinicians in the clinical management of CRC risk,” the researchers wrote.

Several of the researchers reported conflicts of interest with Exact Sciences Corp and Freenome. The study received no industry funding.


FROM GASTROENTEROLOGY


New HBV model may open door to more effective antivirals

Long–sought-after breakthrough?

A new mouse model that better represents chronic infection with hepatitis B virus (HBV) in humans may lead to more effective antiviral therapies for HBV, according to investigators.

During human infection, HBV genomes take the form of covalently closed circular DNA (cccDNA), a structure that has thwarted effective antiviral therapy and, until now, creation of an accurate mouse model, reported lead author Zaichao Xu, PhD, of Wuhan (China) University and colleagues.

“As the viral persistence reservoir plays a central role in HBV infection, HBV cccDNA is the key obstacle for a cure,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology.

Although several previous mouse models have approximated this phenomenon with recombinant cccDNA-like molecules (rcccDNA), the present model is the first to achieve genuine cccDNA, which does not naturally occur in mice.

“Although rcccDNA supports persistent viral replication and antigen expression, the nature of rcccDNA may differ from authentic cccDNA, as additional sequences, like LoxP or attR, were inserted into the HBV genome,” the investigators noted.

The new model was created by first constructing an adeno-associated virus vector carrying a replication-deficient, 1.04-fold HBV genome (AAV-HBV1.04). When injected into mice, the vector led to cccDNA formation via an ataxia-telangiectasia and Rad3-related protein (ATR)–mediated DNA damage response, a finding that was confirmed by blocking the same process with ATR inhibitors.

Immediately after injection, mice tested positive for both hepatitis B e antigen (HBeAg) and hepatitis B surface antigen (HBsAg), with peak concentrations after either 4 or 8 weeks depending on dose. HBV DNA was also detected in serum after injection, and 50% of hepatocytes exhibited HBsAg and hepatitis B core protein (HBc) after 1 week. At week 66, HBsAg, HBeAg, and HBc were still detectable in the liver.

“The expression of HBc could only be observed in the liver, but not in other organs or tissues, suggesting that the AAV-HBV1.04 only targeted the mouse liver,” the investigators wrote.

Further experimentation involving known cccDNA-binding proteins supported the similarity between cccDNA in the mouse model and natural infection.

“These results suggested that the chromatinization and transcriptional activation of cccDNA formed in this model does not differ from wild-type cccDNA formed through infection.”

Next, Dr. Xu and colleagues demonstrated that the infected mice could serve as a reliable model for antiviral research. One week after injection with the vector, mice were treated with entecavir, polyinosinic-polycytidylic acid (poly[I:C]), or phosphate-buffered saline (PBS; control). As anticipated, entecavir suppressed circulating HBV DNA, but not HBsAg, HBeAg, or HBV cccDNA, whereas treatment with poly(I:C) reduced all HBV markers.

“This novel mouse model will provide a unique platform for studying HBV cccDNA and developing novel antivirals to achieve HBV cure,” the investigators concluded.

The study was supported by the National Natural Science Foundation of China, the Fundamental Research Funds for the Central Universities, Hubei Province’s Outstanding Medical Academic Leader Program, and others. The investigators reported no conflicts of interest.


On the heels of the wondrous development of curative antiviral agents for hepatitis C virus (HCV), renewed attention has been directed to efforts to bring about the cure of HBV. However, this task will hinge on successful elimination of covalently closed circular DNA (cccDNA), a highly stable form of viral DNA that is exceedingly difficult to eradicate. Efforts to develop successful curative strategies will in turn rely on development of small animal models that support HBV cccDNA formation and virus production, which has until recently proved elusive. In the past several years, several mouse HBV models supporting cccDNA formation have been constructed using adeno-associated vector (AAV)–mediated transduction of a linearized HBV genome. Both the AAV-HBV linear episome and cccDNA have been consistently replicated and detected in these models. While they recapitulate the key steps of the viral life cycle, these models do not, however, lend themselves to direct assessment of cccDNA, which has traditionally required detection of cccDNA in the liver.

Xu et al. have now developed a novel mouse model in which generation of HBsAg is directly dependent on generation of cccDNA. This dependence thus yields a simple marker for assessment of cccDNA status and allows monitoring of the therapeutic effects of novel agents targeting cccDNA by simply following HBsAg titers. More studies are required to explore the mechanisms underlying HBV cccDNA formation and elimination, but this work suggests a new way forward to tractably evaluate agents that specifically interrupt cccDNA metabolism, an important step in our systematic march toward HBV cure.
 

Raymond T. Chung, MD, is a professor of medicine at Harvard Medical School and director of the Hepatology and Liver Center at Massachusetts General Hospital, both in Boston. He has no conflicts to disclose.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY


Bowel ultrasound may overtake colonoscopy in Crohn’s

A 'significant financial burden' avoided

Bowel ultrasound predicts the clinical course of Crohn’s disease for up to 1 year, according to results of a prospective trial involving 225 patients.

After additional confirmation in larger studies, ultrasound could serve as a noninvasive alternative to colonoscopy for monitoring and predicting disease course, reported lead author Mariangela Allocca, MD, PhD, of Humanitas University, Milan, and colleagues.

“Frequent colonoscopies are expensive, invasive, and not well tolerated by patients, thus noninvasive tools for assessment and monitoring are strongly needed,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Bowel ultrasound accurately detects inflammatory bowel disease activity, extent, and complications, particularly in Crohn’s disease. Considering its low cost, minimal invasiveness ... and easy repeatability, bowel ultrasound may be a simple, readily available tool for assessing and monitoring Crohn’s disease.”

To test this hypothesis, Dr. Allocca and colleagues enrolled 225 consecutive patients with ileal and/or colonic Crohn’s disease diagnosed at least 6 months earlier and managed at a tertiary hospital in Italy. All patients underwent both colonoscopy and bowel ultrasound, with no more than 3 months between the two procedures.

Colonoscopy results were characterized by the Simplified Endoscopic Score for Crohn’s Disease (SES-CD), whereas ultrasound was scored using several parameters, including bowel wall pattern, bowel wall thickness, bowel wall flow, presence of complications (abscess, fistula, stricture), and characteristics of mesenteric lymph nodes and tissue. Ultrasound scores were considered high if they exceeded a cut-off of 3.52, which was determined by receiver operating characteristic curve analysis.
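The 3.52 cut-off above was derived from an ROC analysis. One common way to pick such a threshold (not necessarily the authors' exact procedure) is to choose the candidate cut-off that maximizes Youden's J, defined as sensitivity + specificity − 1. A toy sketch under that assumption, using invented ultrasound scores rather than study data:

```python
def youden_cutoff(pos_scores, neg_scores):
    """Pick the score threshold maximizing Youden's J = sens + spec - 1.
    A score strictly above the cut-off is classified as 'positive'."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(pos_scores) | set(neg_scores)):
        sens = sum(s > cut for s in pos_scores) / len(pos_scores)
        spec = sum(s <= cut for s in neg_scores) / len(neg_scores)
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical scores for patients with and without endoscopically active disease.
active = [4.1, 5.0, 3.6, 6.2, 2.9]
inactive = [1.0, 2.2, 3.5, 1.8, 2.7]
cut, j = youden_cutoff(active, inactive)  # cut = 2.7 for these toy data
```

In practice the candidate thresholds come from the full cohort's score distribution, which is how a value such as 3.52 (rather than a round number) can emerge.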

Participants were followed for 12 months after baseline ultrasound. The primary objective was to determine the relationship between baseline ultrasound findings and negative disease course, defined by steroid usage, need for surgery, need for hospitalization, and/or change in therapy. The secondary objective was to understand the relationship between ultrasound findings and endoscopy activity.

Multivariable analysis revealed that ultrasound scores greater than 3.52 predicted a negative clinical disease course for up to 1 year (odds ratio, 6.97; 95% confidence interval, 2.87-16.93; P < .001), as did the presence of at least one disease complication at baseline (OR, 3.90; 95% CI, 1.21-12.53; P = .021). A worse clinical course at 1 year was also predicted by a baseline fecal calprotectin value of at least 250 mcg/g (OR, 5.43; 95% CI, 2.25-13.11; P < .001) and male sex (OR, 2.60; 95% CI, 1.12-6.02; P = .025).

Investigators then assessed the relationship between baseline findings and individual disease outcomes at 12 months. For example, a high ultrasound score and elevated fecal calprotectin at baseline each predicted the need for treatment escalation, whereas disease behavior (inflammatory, stricturing, penetrating) and C-reactive protein (CRP) predicted the need for corticosteroids. The only significant baseline predictor of hospitalization a year later was CRP.

“[B]owel ultrasound is able to predict disease course in Crohn’s disease patients,” they wrote. “It may identify patients at high risk of a negative course to adopt effective strategies to prevent any disease progression. Our data need to be confirmed and validated in further large studies.”

The investigators disclosed relationships with Janssen, AbbVie, Mundipharma, and others.


Patients with Crohn’s disease (CD) undergo multiple colonoscopies during their lifetime. Endoscopic assessment is often necessary to determine the extent and severity of inflammation, to guide the choice of therapy, to assess mucosal healing on current therapy, and to perform surveillance for colorectal dysplasia. Multiple colonoscopies over a lifetime present a significant financial burden for patients, and the invasive nature of the procedure, along with patient discomfort and a small but real risk of perforation, makes for an undesirable experience. Cross-sectional imaging offers the advantage of a noninvasive means of assessing the bowel wall and extraluminal complications of CD. Bowel ultrasound, performed as point-of-care imaging by gastroenterologists, is an emerging alternative for visualizing the bowel.

In the study by Allocca et al., the authors developed a bowel ultrasound–based score incorporating bowel wall thickness, pattern, flow, and presence of extraluminal complications. The score was developed by comparing ultrasound parameters with colonoscopy findings for each segment of the colon and terminal ileum. In a cohort of 225 patients, a bowel ultrasound score of >3.52 along with at least one extraluminal complication, baseline fecal calprotectin of >250 mcg/g, and male gender were linked with adverse outcomes within 12 months (defined as need for steroids, change of therapy, hospitalization, or surgery).


While these observations need to be validated externally, this study further consolidates the role of bowel ultrasound as a viable imaging modality for monitoring disease and response to therapy in CD. Prior studies have shown bowel ultrasound is a valid alternative to MR enterography, without the expense, limited availability, and need for gadolinium contrast. As the therapeutic targets in IBD move toward mucosal healing, bowel ultrasound offers the promise of a cost-effective, noninvasive, point-of-care test that can be performed during an office consultation. The operator-dependent nature of this modality may limit its uptake and utilization. The International Bowel Ultrasound Group (IBUS) has collaborated with the European Crohn’s and Colitis Organisation as well as the Canadian Association of Gastroenterology to establish training and research in bowel ultrasound. Soon, patients can expect a bowel ultrasound to become part of their routine assessment during an office consultation.

Manreet Kaur, MD, is medical director of the Inflammatory Bowel Disease Center and an associate professor in the division of gastroenterology and hepatology at Baylor College of Medicine, Houston. She has no relevant conflicts of interest.
 

A 'significant financial burden' avoided

Bowel ultrasound predicts the clinical course of Crohn’s disease for up to 1 year, according to results of a prospective study involving 225 patients.

After additional confirmation in larger studies, ultrasound could serve as a noninvasive alternative to colonoscopy for monitoring and predicting disease course, reported lead author Mariangela Allocca, MD, PhD, of Humanitas University, Milan, and colleagues.

“Frequent colonoscopies are expensive, invasive, and not well tolerated by patients, thus noninvasive tools for assessment and monitoring are strongly needed,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Bowel ultrasound accurately detects inflammatory bowel disease activity, extent, and complications, particularly in Crohn’s disease. Considering its low cost, minimal invasiveness ... and easy repeatability, bowel ultrasound may be a simple, readily available tool for assessing and monitoring Crohn’s disease.”

To test this hypothesis, Dr. Allocca and colleagues enrolled 225 consecutive patients with ileal and/or colonic Crohn’s disease diagnosed for at least 6 months and managed at a tertiary hospital in Italy. All patients underwent both colonoscopy and bowel ultrasound with no more than 3 months between each procedure.

Colonoscopy results were characterized by the Simple Endoscopic Score for Crohn’s Disease (SES-CD), whereas ultrasound was scored using several parameters, including bowel wall pattern, bowel wall thickness, bowel wall flow, presence of complications (abscess, fistula, stricture), and characteristics of mesenteric lymph nodes and tissue. Ultrasound scores were considered high if they exceeded a cutoff of 3.52, which was determined by receiver operating characteristic curve analysis.

Participants were followed for 12 months after baseline ultrasound. The primary objective was to determine the relationship between baseline ultrasound findings and negative disease course, defined by steroid usage, need for surgery, need for hospitalization, and/or change in therapy. The secondary objective was to understand the relationship between ultrasound findings and endoscopy activity.

Multivariable analysis revealed that ultrasound scores greater than 3.52 predicted a negative clinical disease course for up to 1 year (odds ratio, 6.97; 95% confidence interval, 2.87-16.93; P < .001), as did the presence of at least one disease complication at baseline (OR, 3.90; 95% CI, 1.21-12.53; P = .021). A worse clinical course at 1 year was also predicted by a baseline fecal calprotectin value of at least 250 mcg/g (OR, 5.43; 95% CI, 2.25-13.11; P < .001) and male sex (OR, 2.60; 95% CI, 1.12-6.02; P = .025).

The investigators then assessed the relationship between baseline findings and individual disease outcomes at 12 months. For example, a high ultrasound score and a high fecal calprotectin level at baseline each predicted the need for treatment escalation. In comparison, disease behavior (inflammatory, stricturing, penetrating) and C-reactive protein (CRP) predicted the need for corticosteroids. The only significant predictor of hospitalization a year later was CRP.

“[B]owel ultrasound is able to predict disease course in Crohn’s disease patients,” they wrote. “It may identify patients at high risk of a negative course to adopt effective strategies to prevent any disease progression. Our data need to be confirmed and validated in further large studies.”

The investigators disclosed relationships with Janssen, AbbVie, Mundipharma, and others.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Children and COVID: Decline in new cases reaches 7th week

Tue, 03/15/2022 - 16:20

New cases of COVID-19 in U.S. children have fallen to their lowest level since the beginning of the Delta surge in July of 2021, according to the American Academy of Pediatrics and the Children’s Hospital Association.

Just under 42,000 new cases were reported during the week of March 4-10, making it the 7th consecutive week of declines since the peak of the Omicron surge in January. Over those 7 weeks, new cases dropped over 96% from the 1.15 million reported for Jan. 14-20, based on data collected by the AAP and CHA from state and territorial health departments.

The last time that the weekly count was below 42,000 was July 16-22, 2021, when almost 39,000 cases were reported in the midst of the Delta upsurge. That was shortly after cases had reached their lowest point, 8,447, since the early stages of the pandemic in 2020, the AAP/CHA data show.

The cumulative number of pediatric cases is now up to 12.7 million, while the overall proportion of cases occurring in children held steady at 19.0% for the 4th week in a row, the AAP and CHA said in their weekly COVID-19 report. The Centers for Disease Control and Prevention, using an age range of 0-18 versus the states’ variety of ages, puts total cases at 11.7 million and deaths at 1,656 as of March 14.

Data from the CDC’s COVID-19–Associated Hospitalization Surveillance Network show that hospitalizations with laboratory-confirmed infection were down by 50% in children aged 0-4 years, by 63% among 5- to 11-year-olds, and by 58% in those aged 12-17 years for the week of Feb. 27 to March 5, compared with the week before.

The pace of vaccination continues to follow a similar trend, as the declines seen through February have continued into March. Cumulatively, 33.7% of children aged 5-11 have received at least one dose, and 26.8% are fully vaccinated, with corresponding numbers of 68.0% and 58.0% for children aged 12-17, the CDC reported on its COVID Data Tracker.

State-level data show that children aged 5-11 in Vermont, with a rate of 65%, are the most likely to have received at least one dose of COVID vaccine, while just 15% of 5- to 11-year-olds in Alabama, Louisiana, and Mississippi have gotten their first dose. Among children aged 12-17, that rate ranges from 40% in Wyoming to 94% in Hawaii, Massachusetts, and Rhode Island, the AAP said in a separate report based on CDC data.

In a recent report involving 1,364 children aged 5-15 years, two doses of the COVID-19 vaccine reduced the risk of infection from the Omicron variant by 31% in children aged 5-11 years and by 59% among children aged 12-15 years, said Ashley L. Fowlkes, ScD, of the CDC’s COVID-19 Emergency Response Team, and associates (MMWR 2022 Mar 11;71).
 


The science of clean skin care and the clean beauty movement

Tue, 03/15/2022 - 16:13

As the clean beauty movement is gaining momentum, it has become challenging to differentiate between science and marketing hype. I see numerous social media posts, blogs, and magazine articles about toxic skin care ingredients, while more patients are asking their dermatologists about clean beauty products. So, I decided it was time to dissect the issues and figure out what “clean” really means to me.

The problem is that no one agrees on a clean ingredient standard for beauty products. Many companies, like Target, Walgreens/Boots, Sephora, Neiman Marcus, Whole Foods, and Ulta, have their own varying clean standards. Even Allure magazine has a “Clean Best of Beauty” seal. California has Proposition 65, otherwise known as the Safe Drinking Water and Toxic Enforcement Act of 1986, which contains a list of banned chemicals “known to the state to cause cancer or reproductive toxicity.” In January 2021, Hawai‘i law prohibited the sale of oxybenzone and octinoxate in sunscreens in response to scientific studies showing that these ingredients “are toxic to corals and other marine life.” The Environmental Working Group (EWG) rates the safety of ingredients based on carcinogenicity, developmental and reproductive toxicity, allergenicity, and immunotoxicity. The Cosmetic Ingredient Review (CIR), funded by the Personal Care Products Council, consists of a seven-member steering committee that has at least one dermatologist representing the American Academy of Dermatology and a toxicologist representing the Society of Toxicology. The CIR publishes detailed reviews of ingredients that can be easily found on PubMed and Google Scholar and closely reviews animal and human data and reports on safety and contact dermatitis risk.
 

Which clean beauty standard is best?

I reviewed most of the various standards, clean seals, laws, and safety reports and found significant discrepancies resulting from misunderstandings of the science, lack of depth in the scientific evaluations, lumping of ingredients into a larger category, or lack of data. The most salient cause of misinformation and confusion seems to be hyperbolic claims by the media and clean beauty advocates who do not understand the basic science.

Dr. Leslie S. Baumann

When I conducted a survey of cosmetic chemists on my LinkedIn account, most of the chemists stated that “ ‘Clean Beauty’ is a marketing term, more than a scientific term.” None of the chemists could give an exact definition of clean beauty. However, I thought I needed a good answer for my patients and for doctors who want to use and recommend “clean skin care brands.”

A dermatologist’s approach to develop a clean beauty standard

Many of the standards combine all of the following into the “clean” designation: nontoxic to the environment (both the production process and the resulting ingredient), nontoxic to marine life and coral, cruelty-free (not tested on animals), hypoallergenic, lacking in known health risks (carcinogenicity, reproductive toxicity), vegan, and gluten free. As a dermatologist, I am a splitter more than a lumper, so I prefer that “clean” be split into categories to make it easier to understand. With that in mind, I will focus on clean beauty ingredients only as they pertain to health: carcinogenicity, endocrine effects, nephrotoxicity, hepatotoxicity, immunotoxicity, etc. This discussion will not consider environmental effects, reproductive toxicity (some ingredients may decrease fertility, which is beyond the scope of this article), ingredient sources, and sustainability, animal testing, or human rights violations during production. Those issues are important, of course, but for clarity and simplicity, we will focus on the health risks of skin care ingredients.

In this month’s column, I will focus on a few ingredients and will continue the discussion in subsequent columns. Please note that commercial standards such as Target standards are based on the product type (e.g., cleansers, sunscreens, or moisturizers). So, when I mention an ingredient not allowed by certain company standards, note that it can vary by product type. My comments pertain mostly to facial moisturizers and facial serums to try and simplify the information. The Good Face Project has a complete list of standards by product type, which I recommend as a resource if you want more detailed information.
 

Are ethanolamines safe or toxic in cosmetics?

Ethanolamines are common ingredients in surfactants, fragrances, and emulsifying agents and include cocamide diethanolamine (DEA), cocamide monoethanolamine (MEA), and triethanolamine (TEA). Cocamide DEA, lauramide DEA, linoleamide DEA, and oleamide DEA are fatty acid diethanolamides that may contain 4% to 33% diethanolamine.1 A Google search of toxic ingredients in beauty products consistently identifies ethanolamines among such offending product constituents. Table 1 reveals that ethanolamines are excluded from some standards and included in others (N = not allowed or restricted by amount used and Y = allowed with no restrictions). As you can see, the standards don’t correspond to the EWG rating of the ingredients, which ranges from 1 (low hazard) to 10 (high hazard).

Why are ethanolamines sometimes considered safe and sometimes not?

Ethanolamines are reputed to be allergenic, but as we know as dermatologists, that does not mean that everyone will react to them. (In my opinion, allergenicity is a separate issue from the clean issue.) One study showed that TEA in 2.5% petrolatum had a 0.4% positive patch test rate in humans, which was thought to be related more to irritation than to allergy.2 Cocamide DEA allergy is seen in those with hand dermatitis resulting from hand cleansers but is more commonly seen in metal workers.3 For this reason, these ethanolamines are usually found in rinse-off products to decrease exposure time. But there are many irritating ingredients not banned by Target, Sephora, and Ulta, so why do ethanolamines end up on toxic ingredient lists?

First, there is the issue of oral studies in animals. Oral forms of some ethanolamines have shown mild toxicity in rats, but topical forms have not been demonstrated to cause mutagenicity.1 For this reason, ethanolamines in their native form are considered safe.

The main issue with ethanolamines is that, when they are formulated with ingredients that break down into nitrogen, such as certain preservatives, the combination forms nitrosamines, such as N-nitrosodiethylamine (NDEA), which are carcinogenic.4 The European Commission prohibits DEA in cosmetics based on concerns about formation of these carcinogenic nitrosamines. Some standards limit ethanolamines to rinse-off products.5 The CIR panel concluded that diethanolamine and its 16 salts are safe if they are not used in cosmetic products in which N-nitroso compounds can be formed and that TEA and TEA-related compounds are safe if they are not used in cosmetic products in which N-nitroso compounds can be formed.6,7 The FDA states that there is no reason for consumers to be alarmed based on the use of DEA in cosmetics.8

The safety issues surrounding the use of ethanolamines in a skin care routine illustrate an important point: Every single product in the skin care routine should be compatible with the other products in the regimen. Using ethanolamines in a rinse-off product is one solution, as is ensuring that no other products in the skin care routine contain N-nitroso compounds that can combine with ethanolamines to form nitrosamines.
 

 

 

Are natural products safer?

Natural products are not necessarily any safer than synthetic products. Considering ethanolamines as the example here, note that cocamide DEA is an ethanolamine derived from coconut. It is often found in “green” or “natural” skin care products.9 It can still combine with N-nitroso compounds to form carcinogenic nitrosamines.

What is the bottom line? Are ethanolamines safe in cosmetics?

For now, if a patient asks if ethanolamine is safe in skin care, my answer would be yes, so long as the following is true:

  • It is in a rinse-off product.
  • The patient is not allergic to it.
  • They do not have hand dermatitis.
  • Their skin care routine does not include nitrogen-containing compounds like N-nitrosodiethanolamine (NDELA) or NDEA.

Conclusion

This column uses ethanolamines as an example to show the disparity in clean standards in the cosmetic industry. As you can see, there are multiple factors to consider. I will begin including clean information in my cosmeceutical critique columns to address some of these issues.

Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur who practices in Miami. She founded the Cosmetic Dermatology Center at the University of Miami in 1997. Dr. Baumann has written two textbooks and a New York Times Best Sellers book for consumers. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Revance, Evolus, and Burt’s Bees. She is the CEO of Skin Type Solutions Inc., a company that independently tests skin care products and makes recommendations to physicians on which skin care technologies are best. Write to her at [email protected].

References

1. Cocamide DEA. J Am Coll Toxicol. 1986;5(5).

2. Lessmann H et al. Contact Dermatitis. 2009 May;60(5):243-55.

3. Aalto-Korte K et al. 2014 Mar;70(3):169-74.

4. Kraeling ME et al. Food Chem Toxicol. 2004 Oct;42(10):1553-61.

5. Fiume MM et al. Int J Toxicol. 2015 Sep;34(2 Suppl):84S-98S.

6. Fiume MM. Int J Toxicol. 2017 Sep/Oct;36(5_suppl2):89S-110S.

7. Fiume MM et al. Int J Toxicol. 2013 May-Jun;32(3 Suppl):59S-83S.

8. U.S. Food & Drug Administration. Diethanolamine. https://www.fda.gov/cosmetics/cosmetic-ingredients/diethanolamine. Accessed Feb. 12, 2022.

9. Aryanti N et al. IOP Conference Series: Materials Science and Engineering 2021 Feb 1 (Vol. 1053, No. 1, p. 012066). IOP Publishing.

Publications
Topics
Sections

As the clean beauty movement gains momentum, it has become challenging to differentiate between science and marketing hype. I see numerous social media posts, blogs, and magazine articles about toxic skin care ingredients, while more patients are asking their dermatologists about clean beauty products. So, I decided it was time to dissect the issues and figure out what “clean” really means to me.

The problem is that no one agrees on a clean ingredient standard for beauty products. Many companies, like Target, Walgreens/Boots, Sephora, Neiman Marcus, Whole Foods, and Ulta, have their own varying clean standards. Even Allure magazine has a “Clean Best of Beauty” seal. California has Proposition 65, otherwise known as the Safe Drinking Water and Toxic Enforcement Act of 1986, which contains a list of banned chemicals “known to the state to cause cancer or reproductive toxicity.” In January 2021, Hawai‘i law prohibited the sale of oxybenzone and octinoxate in sunscreens in response to scientific studies showing that these ingredients “are toxic to corals and other marine life.” The Environmental Working Group (EWG) rates the safety of ingredients based on carcinogenicity, developmental and reproductive toxicity, allergenicity, and immunotoxicity. The Cosmetic Ingredient Review (CIR), funded by the Personal Care Products Council, consists of a seven-member steering committee that has at least one dermatologist representing the American Academy of Dermatology and a toxicologist representing the Society of Toxicology. The CIR publishes detailed reviews of ingredients that can be easily found on PubMed and Google Scholar and closely reviews animal and human data and reports on safety and contact dermatitis risk.
 

Which clean beauty standard is best?

I reviewed most of the various standards, clean seals, laws, and safety reports and found significant discrepancies resulting from misunderstandings of the science, lack of depth in the scientific evaluations, lumping of ingredients into a larger category, or lack of data. The most salient cause of misinformation and confusion seems to be hyperbolic claims by the media and clean beauty advocates who do not understand the basic science.

Dr. Leslie S. Baumann

When I conducted a survey of cosmetic chemists on my LinkedIn account, most of the chemists stated that “‘Clean Beauty’ is a marketing term, more than a scientific term.” None of the chemists could give an exact definition of clean beauty. However, I thought I needed a good answer for my patients and for doctors who want to use and recommend “clean skin care brands.”

A dermatologist’s approach to developing a clean beauty standard

Many of the standards combine all of the following into the “clean” designation: nontoxic to the environment (both the production process and the resulting ingredient), nontoxic to marine life and coral, cruelty-free (not tested on animals), hypoallergenic, lacking in known health risks (carcinogenicity, reproductive toxicity), vegan, and gluten free. As a dermatologist, I am a splitter more than a lumper, so I prefer that “clean” be split into categories to make it easier to understand. With that in mind, I will focus on clean beauty ingredients only as they pertain to health: carcinogenicity, endocrine effects, nephrotoxicity, hepatotoxicity, immunotoxicity, etc. This discussion will not consider environmental effects, reproductive toxicity (some ingredients may decrease fertility, which is beyond the scope of this article), ingredient sources and sustainability, animal testing, or human rights violations during production. Those issues are important, of course, but for clarity and simplicity, we will focus on the health risks of skin care ingredients.

In this month’s column, I will focus on a few ingredients and will continue the discussion in subsequent columns. Please note that commercial standards such as Target’s are based on the product type (e.g., cleansers, sunscreens, or moisturizers). So, when I mention an ingredient not allowed by certain company standards, note that it can vary by product type. My comments pertain mostly to facial moisturizers and facial serums to simplify the information. The Good Face Project has a complete list of standards by product type, which I recommend as a resource if you want more detailed information.
 

Are ethanolamines safe or toxic in cosmetics?

Ethanolamines are common ingredients in surfactants, fragrances, and emulsifying agents and include cocamide diethanolamine (DEA), cocamide monoethanolamine (MEA), and triethanolamine (TEA). Cocamide DEA, lauramide DEA, linoleamide DEA, and oleamide DEA are fatty acid diethanolamides that may contain 4% to 33% diethanolamine.1 A Google search of toxic ingredients in beauty products consistently identifies ethanolamines among such offending product constituents. Table 1 reveals that ethanolamines are excluded from some standards and included in others (N = not allowed or restricted by amount used and Y = allowed with no restrictions). As you can see, the standards don’t correspond to the EWG rating of the ingredients, which ranges from 1 (low hazard) to 10 (high hazard).

Why are ethanolamines sometimes considered safe and sometimes not?

Ethanolamines are reputed to be allergenic, but as we know as dermatologists, that does not mean that everyone will react to them. (In my opinion, allergenicity is a separate issue from the clean issue.) One study showed that TEA 2.5% in petrolatum had a 0.4% positive patch test rate in humans, which was thought to be related more to irritation than allergenicity.2 Cocamide DEA allergy is seen in those with hand dermatitis resulting from hand cleansers but is more commonly seen in metal workers.3 For this reason, these ethanolamines are usually found in rinse-off products to decrease exposure time. But there are many irritating ingredients not banned by Target, Sephora, and Ulta, so why does ethanolamine end up on toxic ingredient lists?

First, there is the issue of oral studies in animals. Oral forms of some ethanolamines have shown mild toxicity in rats, but topical forms have not been demonstrated to cause mutagenicity.1 For this reason, ethanolamines in their native form are considered safe.

The main issue with ethanolamines is that, when they are formulated with ingredients that can release nitrosating agents, such as certain preservatives, the combination forms nitrosamines, such as N-nitrosodiethylamine (NDEA), which are carcinogenic.4 The European Commission prohibits DEA in cosmetics based on concerns about formation of these carcinogenic nitrosamines. Some standards limit ethanolamines to rinse-off products.5 The CIR panel concluded that diethanolamine and its 16 salts, as well as TEA and TEA-related compounds, are safe as long as they are not used in cosmetic products in which N-nitroso compounds can be formed.6,7 The FDA states that there is no reason for consumers to be alarmed based on the use of DEA in cosmetics.8

The safety issues surrounding the use of ethanolamines in a skin care routine illustrate an important point: Every single product in the skin care routine should be compatible with the other products in the regimen. Using ethanolamines in a rinse-off product is one solution, as is ensuring that no other products in the skin care routine contain N-nitroso compounds that can combine with ethanolamines to form nitrosamines.
Are natural products safer?

Natural products are not necessarily any safer than synthetic products. Considering ethanolamines as the example here, note that cocamide DEA is an ethanolamine derived from coconut. It is often found in “green” or “natural” skin care products.9 It can still combine with N-nitroso compounds to form carcinogenic nitrosamines.

What is the bottom line? Are ethanolamines safe in cosmetics?

For now, if a patient asks if ethanolamine is safe in skin care, my answer would be yes, so long as the following is true:

  • It is in a rinse-off product.
  • The patient is not allergic to it.
  • They do not have hand dermatitis.
  • Their skin care routine does not include N-nitroso compounds that can combine with ethanolamines to form nitrosamines such as N-nitrosodiethanolamine (NDELA) or N-nitrosodiethylamine (NDEA).
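
For illustration only, the four-point checklist above can be sketched as a simple screening function. The function name and parameters are invented for this example; this is not clinical decision logic:

```python
# Hypothetical sketch of the four-point checklist above (illustrative only,
# not clinical guidance). All names here are invented for this example.

def ethanolamine_ok(rinse_off: bool,
                    allergic: bool,
                    hand_dermatitis: bool,
                    nitroso_formers_in_routine: bool) -> bool:
    """Return True only when every checklist condition is satisfied."""
    return (rinse_off
            and not allergic
            and not hand_dermatitis
            and not nitroso_formers_in_routine)

# A rinse-off product, no allergy, no hand dermatitis, and no
# nitrosamine-forming ingredients elsewhere in the routine passes:
print(ethanolamine_ok(True, False, False, False))   # True
# The same ingredient in a leave-on product does not:
print(ethanolamine_ok(False, False, False, False))  # False
```

The point of the sketch is that all four conditions must hold at once; failing any one of them changes the answer.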

Conclusion

This column uses ethanolamines as an example to show the disparity in clean standards in the cosmetic industry. As you can see, there are multiple factors to consider. I will begin including clean information in my cosmeceutical critique columns to address some of these issues.

Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur who practices in Miami. She founded the Cosmetic Dermatology Center at the University of Miami in 1997. Dr. Baumann has written two textbooks and a New York Times bestselling book for consumers. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Revance, Evolus, and Burt’s Bees. She is the CEO of Skin Type Solutions Inc., a company that independently tests skin care products and makes recommendations to physicians on which skin care technologies are best. Write to her at [email protected].

References

1. Cocamide DEA. J Am Coll Toxicol. 1986;5(5).

2. Lessmann H et al. Contact Dermatitis. 2009 May;60(5):243-55.

3. Aalto-Korte K et al. Contact Dermatitis. 2014 Mar;70(3):169-74.

4. Kraeling ME et al. Food Chem Toxicol. 2004 Oct;42(10):1553-61.

5. Fiume MM et al. Int J Toxicol. 2015 Sep;34(2 Suppl):84S-98S.

6. Fiume MM et al. Int J Toxicol. 2017 Sep/Oct;36(5 Suppl 2):89S-110S.

7. Fiume MM et al. Int J Toxicol. 2013 May-Jun;32(3 Suppl):59S-83S.

8. U.S. Food & Drug Administration. Diethanolamine. https://www.fda.gov/cosmetics/cosmetic-ingredients/diethanolamine. Accessed Feb. 12, 2022.

9. Aryanti N et al. IOP Conference Series: Materials Science and Engineering 2021 Feb 1 (Vol. 1053, No. 1, p. 012066). IOP Publishing.


Inside insulin (Part 2): Approaching a cure for type 1 diabetes?

Article Type
Changed
Tue, 05/03/2022 - 15:01

Editor’s note: This is the second in a two-part series commemorating the 100-year anniversary of the first use of insulin in humans. Part 1 of this series examined the rivalry behind the discovery and use of insulin.

One hundred years ago, teenager Leonard Thompson was the first patient with type 1 diabetes to be successfully treated with insulin, granting him a reprieve from what was a certain death sentence at the time.

Since then, research has gathered pace. In the century since insulin’s discovery and first use, recombinant DNA technology has allowed for the engineering of the insulin molecule, providing numerous short- and long-acting analog versions. At the same time, technological leaps in automated insulin delivery and monitoring of blood glucose ensure more time with glucose in range and fewer life-threatening complications for those with type 1 diabetes fortunate enough to have access to the technology. 

In spite of these advancements, there is still scope for further evolution of disease management, with the holy grail being the transplant of stem cell–derived islet cells capable of making insulin, ideally encased in some kind of protective device so that immunosuppression is not required.

Indeed, it is not unreasonable to “hope that type 1 diabetes will be a curable disease in the next 100 years,” said Elizabeth Stephens, MD, an endocrinologist who has type 1 diabetes and practices in Portland, Ore.
 

Type 1 diabetes: The past 100 years

The epidemiology of type 1 diabetes has shifted considerably since 1922. A century ago, given that average life expectancy in the United States was around 54 years, it was pretty much the only type of diabetes that doctors encountered. “There was some type 2 diabetes about in heavier people, but the focus was on type 1 diabetes,” noted Dr. Stephens.

Originally called juvenile diabetes because it was thought to only occur in children, “now 50% of people are diagnosed with type 1 diabetes ... over [the age of] 20,” explained Dr. Stephens.

In the United States, around 1.4 million adults 20 years and older, and 187,000 children younger than 20, have the disease, according to data from the National Diabetes Statistics Report 2020 by the Centers for Disease Control and Prevention. This total represents an increase of nearly 30% from 2017.

Over the years, theories as to the cause, or trigger, for type 1 diabetes “have included cow’s milk and [viral] infections,” said Dr. Stephens. “Most likely, there’s a genetic predisposition and some type of exposure, which creates the perfect storm to trigger disease.”

There are hints that COVID-19 might be precipitating type 1 diabetes in some people. Recently, the CDC found that SARS-CoV-2 infection was associated with an increased risk for diabetes (all types) among youth, whereas other acute respiratory infections were not. And two further studies from different parts of the world have recently identified an increase in the incidence of type 1 diabetes in children since the COVID-19 pandemic began, but the reasons remain unclear.

The global CoviDiab registry has also been established to collect data on patients with COVID-19–related diabetes.

The million-dollar question: Is COVID-19 itself propagating type 1 diabetes, or is it unmasking a predisposition to the disease sooner? The latter might be associated with a lower type 1 diabetes rate in the future, said Partha Kar, MBBS, OBE, national specialty advisor, diabetes, for National Health Service England.

“Right now, we don’t know the answer. Whichever way you look at it, it is likely there will be a rise in cases, and in countries where insulin is not freely available, healthcare systems need to have supply ready because insulin is lifesaving in type 1 diabetes,” Dr. Kar emphasized.
CGMs and automated insulin delivery: A ‘godsend’

A huge change has also been seen, most notably in the past 15 to 20 years, in the technological advancements that can help those with type 1 diabetes live an easier life.

Continuous glucose monitors (CGMs) and automated ways of delivering insulin, such as smart pens and insulin pumps, have made the daily life of a person with type 1 diabetes in the Western world considerably more comfortable.

CGMs provide a constant stream of data to an app, often wirelessly in sync with the insulin pump. However, on a global level, they are only available to a lucky few.

In England, pending National Institute for Health and Care Excellence (NICE) approval, any CGM should be available to all eligible patients with type 1 diabetes within the NHS from April 2022, Dr. Kar pointed out. In the United States, CGMs are often unaffordable and access is mostly dependent on a person’s health insurance.

Kersten Hall, PhD, a scientist and U.K.-based medical historian who recently wrote a book, “Insulin, the Crooked Timber” (Oxford, England: Oxford University Press, 2022) uncovering the lesser-known story behind the discovery of insulin, was diagnosed with adult-onset type 1 diabetes at the age of 41. Dr. Hall had always found the finger-prick blood glucose test to be a chore but now has a CGM. 

“It’s a total game changer for me: a godsend. I can’t sing its praises enough,” he said. “All it involves is the swipe of the phone and this provides a reading which tells me if my glucose is too low, so I eat something, or too high, so I might [go for] a run.”
 

Brewing insulin at scale

As described by Dr. Hall in his book, the journey from treating Mr. Thompson in 1922 to treating the masses began when biochemist James Collip, MD, PhD, discovered a means of purifying the animal pancreas extracts used to treat the teenager.

But production at scale presented a further challenge. This was overcome in 1924 when Eli Lilly drew on a technique used in the beer brewing process – where pH guides bitterness – to purify and manufacture large amounts of insulin.

By 1936, a range of slower-acting cattle and pig-derived insulins, the first produced by Novo Nordisk Pharmaceuticals, were developed.

However, it took 8,000 lb (approximately 3,600 kg) of pancreas glands from 23,500 animals to make 1 lb (0.5 kg) of insulin, so a more efficient process was badly needed.

Dr. Hall, who is a molecular biologist as well as an author, explains that the use of recombinant DNA technology to produce human insulin, as done by Genentech in the late 1970s, was a key development in the story of modern insulin products. Genentech then provided synthetic human insulin for Eli Lilly to conduct clinical trials.

Human insulin most closely resembles porcine insulin structure and function, differing by only one amino acid, while human insulin differs from bovine insulin by three amino acid residues. This synthetic human insulin eliminated the allergies that the animal-derived products sometimes caused.
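
To make the sequence comparison concrete, here is a small sketch that counts position-by-position residue differences. The short strings are made-up placeholders for this example, not real insulin sequences:

```python
# Illustrative helper for counting amino acid differences between two
# equal-length sequences. The example strings below are placeholders
# invented for this sketch, NOT actual insulin sequences.

def residue_differences(seq_a: str, seq_b: str) -> int:
    """Count positions at which the two sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

human_like   = "ACDEFGHIKL"
porcine_like = "ACDEFGHIKV"  # differs at 1 position, as porcine does from human
bovine_like  = "ACDEFGHTRV"  # differs at 3 positions, as bovine does from human

print(residue_differences(human_like, porcine_like))  # 1
print(residue_differences(human_like, bovine_like))   # 3
```

Even one or three residue changes out of dozens is enough to make animal insulins occasionally immunogenic, which is why the exact-match synthetic human sequence mattered.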

In the early 1980s, Eli Lilly produced Humulin, the first biosynthetic human insulin (made in Escherichia coli, hence the term “bio”).

This technology eventually “allowed for the alteration of specific amino acids in the sequence of the insulin protein to make insulin analogs [synthetic versions grown in E. coli and genetically altered for various properties] that act faster, or more slowly, than normal human insulin. By using the slow- and fast-acting insulins in combination, a patient can control their blood sugar levels with a much greater degree of finesse and precision,” Dr. Hall explained.

Today, a whole range of insulins are available, including ultra–rapid-acting, short-acting, intermediate-acting, long-acting, ultra–long-acting, and even inhaled insulin, although the latter is expensive, has been associated with side effects, and is less commonly used, according to Dr. Stephens.



Oral insulin formulations are even in the early stages of development, with candidate drugs by Generex and from the Oralis project.

“With insulin therapy, we try to reproduce the normal physiology of the healthy body and pancreas,” Dr. Stephens explained.

Insulin analogs are only made by three companies (Eli Lilly, Novo Nordisk, and Sanofi), and they are generally much more expensive than nonanalog human insulin. In the United Kingdom through the NHS, they cost twice as much.

In the United States today, one of the biggest barriers to proper care of type 1 diabetes is the cost of insulin, which can limit access. With the market controlled by these three large companies, the average cost of a unit of insulin in the United States, according to RAND research, was $98.17 in January 2021, compared with $7.52 in the United Kingdom and $12.00 in Canada. 

Several U.S. states have enacted legislation capping insulin copayments at, or under, $100 a month. But the federal Build Back Better Act – which would cap copayments for insulin at $35 – currently hangs in the balance.

Alongside these moves, in July 2021 the Food and Drug Administration approved the first interchangeable biosimilar insulin for type 1 diabetes (and insulin-dependent type 2 diabetes) in children and adults, called Semglee (Mylan Pharmaceuticals).

Biosimilars (essentially generic versions of branded insulins) are expected to be less expensive than branded analogs, but the indications so far are that they will only be around 20% cheaper.

“I totally fail to understand how the richest country in the world still has a debate about price caps, and we are looking at biosimilar markets to change the debate. This makes no sense to me at all,” stressed Dr. Kar. “For lifesaving drugs, they should be funded by the state.”

Insulin also remains unaffordable for many in numerous low- and middle-income countries, where most patients pay out-of-pocket for medicines. Globally, there are estimated to be around 30 million people who need insulin but cannot afford it.
How near to a cure in the coming decades?

Looking ahead to the coming years, if not the next 100, Dr. Stephens highlighted two important aspects of care.

First, the use of a CGM device in combination with an insulin pump (also known as a closed-loop system or artificial pancreas), where the CGM effectively tells the insulin pump how much insulin to automatically dispense, should revolutionize care.

A number of such closed-loop systems have recently been approved in the United States, including systems from Medtronic and Omnipod, as well as in Europe.

“I wear one of these and it’s been a life changer for me, but it doesn’t suit everyone because the technology can be cumbersome, but with time, hopefully things will become smaller and more accurate in insulin delivery,” Dr. Stephens added.
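
The closed-loop idea described above can be caricatured in a few lines of code: a controller reads glucose and doses insulin in proportion to how far the reading sits above a target. This is a toy numerical sketch with invented constants, nothing like a real medical algorithm:

```python
# Toy proportional-control sketch of the closed-loop concept (conceptual
# only; every constant here is invented and this is NOT a medical algorithm).

TARGET_MG_DL = 110.0   # assumed glucose target for this toy model
GAIN = 0.02            # invented gain: insulin units per mg/dL above target

def dose(glucose_mg_dl: float) -> float:
    """Hypothetical dosing rule: dose grows with distance above target."""
    return max(0.0, GAIN * (glucose_mg_dl - TARGET_MG_DL))

def simulate(glucose: float, steps: int = 5) -> list:
    """Crude simulation: assume 1 unit lowers glucose 30 mg/dL per step."""
    readings = [glucose]
    for _ in range(steps):
        glucose -= 30.0 * dose(glucose)
        readings.append(round(glucose, 1))
    return readings

# Starting high, successive readings drift down toward the target:
print(simulate(180.0))
```

Real systems layer safety constraints, prediction, and insulin-on-board tracking on top of the feedback idea; the sketch only shows why continuous readings plus automatic dosing keep glucose near a target.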

The second advance of interest is the development and transplantation of cells that produce insulin.

Dr. Stephens explained that someone living with type 1 diabetes has a lot to think about, not least, doing the math related to insulin requirement. “If we just had cells from a pancreas that could be transplanted and would do that for us, then it would be a total game changer.”

To date, Vertex Pharmaceuticals has successfully treated one patient – who had lived with type 1 diabetes for about 40 years and had recurrent episodes of severe hypoglycemia – with an infusion of stem cell–derived differentiated islet cells into his liver. The procedure resulted in near reversal of type 1 diabetes, with his insulin dose reduced from 34 to 3 units, and his hemoglobin A1c falling from 8.6% to 7.2%.

And although the patient, Brian Shelton, still needs to take immunosuppressive agents to prevent rejection of the stem cell–derived islets, “it’s a whole new life,” he recently told the New York Times.  

Another company called ViaCyte is also working on a similar approach.

Whether this is a cure for type 1 diabetes is still debatable, said Anne Peters, MD, of the University of Southern California, Los Angeles. “Is it true? In a word, no. But we are part of the way there, which is much closer than we were 6 months ago.”

There are also ongoing clinical trials of therapeutic interventions to prevent or delay the trajectory from presymptomatic to clinical type 1 diabetes. The most advanced is the anti-CD3 monoclonal antibody teplizumab (Tzield, Provention Bio), which was rejected by the FDA in July 2021, but has since been refiled. The company expects to hear from the agency by the end of March 2022 as to whether the resubmission has been accepted.
 

Diabetes specialist nurses/educators keep it human

Dr. Hall said he concurs with the late eminent U.K. diabetes specialist Robert Tattersall’s observation on what he considers one of the most important advances in the management and treatment of type 1 diabetes: the human touch.

Referring to Dr. Tattersall’s book, “Diabetes: A Biography,” Dr. Hall quoted: “If asked what innovation had made the most difference to their lives in the 1980s, patients with type 1 diabetes in England would unhesitatingly have chosen not human insulin, but the spread of diabetes specialist nurses ... these people (mainly women) did more in the last two decades of the 20th century to improve the standard of diabetes care than any other innovation or drug.”

In the United States, diabetes specialist nurses were called diabetes educators until recently, when the name changed to certified diabetes care and education specialist.

“Above all, they have humanized the service and given the patient a say in the otherwise unequal relationship with all-powerful doctors,” concluded Dr. Hall, again quoting Dr. Tattersall.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Editor’s note: This is the second in a two-part series commemorating the 100-year anniversary of the first use of insulin in humans. Part 1 of this series examined the rivalry behind the discovery and use of insulin.

One hundred years ago, teenager Leonard Thompson was the first patient with type 1 diabetes to be successfully treated with insulin, granting him a reprieve from what was a certain death sentence at the time.

Since then, research has gathered pace. In the century since insulin’s discovery and first use, recombinant DNA technology has allowed for the engineering of the insulin molecule, providing numerous short- and long-acting analog versions. At the same time, technological leaps in automated insulin delivery and monitoring of blood glucose ensure more time with glucose in range and fewer life-threatening complications for those with type 1 diabetes fortunate enough to have access to the technology. 

In spite of these advancements, there is still scope for further evolution of disease management, with the holy grail being the transplant of stem cell–derived islet cells capable of making insulin, ideally encased in some kind of protective device so that immunosuppression is not required.

Indeed, it is not unreasonable to “hope that type 1 diabetes will be a curable disease in the next 100 years,” said Elizabeth Stephens, MD, an endocrinologist who has type 1 diabetes and practices in Portland, Ore.
 

Type 1 diabetes: The past 100 years

The epidemiology of type 1 diabetes has shifted considerably since 1922. A century ago, given that average life expectancy in the United States was around 54 years, it was pretty much the only type of diabetes that doctors encountered. “There was some type 2 diabetes about in heavier people, but the focus was on type 1 diabetes,” noted Dr. Stephens.

Originally called juvenile diabetes because it was thought to only occur in children, “now 50% of people are diagnosed with type 1 diabetes ... over [the age of] 20,” explained Dr. Stephens.

In the United States, around 1.4 million adults 20 years and older, and 187,000 children younger than 20, have the disease, according to data from the National Diabetes Statistics Report 2020 by the Centers for Disease Control and Prevention. This total represents an increase of nearly 30% from 2017.

Over the years, theories as to the cause, or trigger, for type 1 diabetes “have included cow’s milk and [viral] infections,” said Dr. Stephens. “Most likely, there’s a genetic predisposition and some type of exposure, which creates the perfect storm to trigger disease.”

There are hints that COVID-19 might be precipitating type 1 diabetes in some people. Recently, the CDC found SARS-CoV-2 infection was associated with an increased risk for diabetes (all types) among youth, whereas other acute respiratory infections were not. And two further studies from different parts of the world have recently identified an increase in the incidence of type 1 diabetes in children since the COVID-19 pandemic began, but the reasons remain unclear.

The global CoviDiab registry has also been established to collect data on patients with COVID-19–related diabetes.

The million-dollar question: Is COVID-19 itself propagating type 1 diabetes, or is it unmasking a predisposition to the disease sooner? The latter might be associated with a lower type 1 diabetes rate in the future, said Partha Kar, MBBS, OBE, national specialty advisor, diabetes, for National Health Service England.

“Right now, we don’t know the answer. Whichever way you look at it, it is likely there will be a rise in cases, and in countries where insulin is not freely available, healthcare systems need to have supply ready because insulin is lifesaving in type 1 diabetes,” Dr. Kar emphasized.
 

 

 

CGMs and automated insulin delivery: A ‘godsend’

A huge change has also been seen, most notably in the past 15 to 20 years, in the technological advancements that can help those with type 1 diabetes live an easier life.

Continuous glucose monitors (CGMs) and automated ways of delivering insulin, such as smart pens and insulin pumps, have made the daily life of a person with type 1 diabetes in the Western world considerably more comfortable.

CGMs provide a constant stream of data to an app, often wirelessly in sync with the insulin pump. However, on a global level, they are only available to a lucky few.

In England, pending National Institute for Health and Care Excellence (NICE) approval, any CGM should be available to all eligible patients with type 1 diabetes within the NHS from April 2022, Dr. Kar pointed out. In the United States, CGMs are often unaffordable and access is mostly dependent on a person’s health insurance.

Kersten Hall, PhD, a scientist and U.K.-based medical historian who recently wrote a book, “Insulin, the Crooked Timber” (Oxford, England: Oxford University Press, 2022) uncovering the lesser-known story behind the discovery of insulin, was diagnosed with adult-onset type 1 diabetes at the age of 41. Dr. Hall had always found the finger-prick blood glucose test to be a chore but now has a CGM. 

“It’s a total game changer for me: a godsend. I can’t sing its praises enough,” he said. “All it involves is the swipe of the phone and this provides a reading which tells me if my glucose is too low, so I eat something, or too high, so I might [go for] a run.”
 

Brewing insulin at scale

As described by Dr. Hall in his book, the journey from treating Mr. Thompson in 1922 to treating the masses began when biochemist James Collip, MD, PhD, discovered a means of purifying the animal pancreas extracts used to treat the teenager.

But production at scale presented a further challenge. This was overcome in 1924 when Eli Lilly drew on a technique used in the beer brewing process – where pH guides bitterness – to purify and manufacture large amounts of insulin.

By 1936, a range of slower-acting cattle and pig-derived insulins, the first produced by Novo Nordisk Pharmaceuticals, were developed.

However, it took 8,000 lb (approximately 3,600 kg) of pancreas glands from 23,500 animals to make 1 lb (0.5 kg) of insulin, so a more efficient process was badly needed.

Dr. Hall, who is a molecular biologist as well as an author, explains that the use of recombinant DNA technology to produce human insulin, as done by Genentech in the late 1970s, was a key development in the story of modern insulin products. Genentech then provided synthetic human insulin for Eli Lilly to conduct clinical trials.

Human insulin most closely resembles porcine insulin structure and function, differing by only one amino acid, while human insulin differs from bovine insulin by three amino acid residues. This synthetic human insulin eliminated the allergies that the animal-derived products sometimes caused.

In the early 1980s, Eli Lilly produced Humulin, the first biosynthetic (made in Escherichia coli, hence the term, “bio”) human insulin.

This technology eventually “allowed for the alteration of specific amino acids in the sequence of the insulin protein to make insulin analogs [synthetic versions grown in E. coli and genetically altered for various properties] that act faster, or more slowly, than normal human insulin. By using the slow- and fast-acting insulins in combination, a patient can control their blood sugar levels with a much greater degree of finesse and precision,” Dr. Hall explained.

Today, a whole range of insulins are available, including ultra–rapid-acting, short-acting, intermediate-acting, long-acting, ultra–long-acting, and even inhaled insulin, although the latter is expensive, has been associated with side effects, and is less commonly used, according to Dr. Stephens.



Oral insulin formulations are even in the early stages of development, with candidate drugs by Generex and from the Oralis project.

“With insulin therapy, we try to reproduce the normal physiology of the healthy body and pancreas,” Dr. Stephens explained.

Insulin analogs are made by only three companies (Eli Lilly, Novo Nordisk, and Sanofi), and they are generally much more expensive than nonanalog human insulin. In the United Kingdom, through the NHS, they cost twice as much.

In the United States today, one of the biggest barriers to proper care of type 1 diabetes is the cost of insulin, which can limit access. With the market controlled by these three large companies, the average cost of a unit of insulin in the United States, according to RAND research, was $98.17 in January 2021, compared with $7.52 in the United Kingdom and $12.00 in Canada. 

Several U.S. states have enacted legislation capping insulin copayments at, or under, $100 a month. But the federal Build Back Better Act – which would cap copayments for insulin at $35 – currently hangs in the balance.

Alongside these moves, in 2020 the Food and Drug Administration approved the first interchangeable biosimilar insulin for type 1 diabetes (and insulin-dependent type 2 diabetes) in children and adults, called Semglee (Mylan Pharmaceuticals). 

Biosimilars (essentially generic versions of branded insulins) are expected to be less expensive than branded analogs, but the indications so far are that they will only be around 20% cheaper.

“I totally fail to understand how the richest country in the world still has a debate about price caps, and we are looking at biosimilar markets to change the debate. This makes no sense to me at all,” stressed Dr. Kar. “For lifesaving drugs, they should be funded by the state.”

Insulin also remains unaffordable for many in numerous low- and middle-income countries, where most patients pay out-of-pocket for medicines. Globally, there are estimated to be around 30 million people who need insulin but cannot afford it.

 

 

How near to a cure in the coming decades?

Looking ahead to the coming years, if not the next 100, Dr. Stephens highlighted two important aspects of care.

First, the use of a CGM device in combination with an insulin pump (also known as a closed-loop system or artificial pancreas), where the CGM effectively tells the insulin pump how much insulin to automatically dispense, should revolutionize care.
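The closed-loop idea described above – CGM readings automatically driving insulin delivery – can be sketched as a simple control step. This is a toy illustration only; the function name, target, and sensitivity figures here are hypothetical simplifications, not any device’s actual dosing algorithm, which relies on far more sophisticated predictive models.

```python
# Toy sketch of one cycle of a closed-loop ("artificial pancreas") system:
# a CGM glucose reading is turned into an automatic micro-dose decision.
# All numbers are illustrative, not clinical guidance.

def basal_adjustment(glucose_mg_dl, target_mg_dl=110, sensitivity=50):
    """Return extra insulin units to deliver this cycle.

    sensitivity: hypothetical mg/dL drop per unit of insulin.
    At or below target the function returns 0.0, mimicking how real
    closed-loop systems suspend or reduce delivery to avoid hypoglycemia.
    """
    excess = glucose_mg_dl - target_mg_dl
    if excess <= 0:
        return 0.0  # at or below target: deliver nothing extra
    return round(excess / sensitivity, 2)

readings = [95, 140, 210]  # simulated CGM values, mg/dL
doses = [basal_adjustment(g) for g in readings]
print(doses)  # [0.0, 0.6, 2.0]
```

The key design point is the feedback loop: the pump no longer depends on the patient doing the dose arithmetic every few hours, because each new CGM reading re-runs the decision automatically.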

A number of such closed-loop systems, including those from Medtronic and Omnipod, have recently been approved in both the United States and Europe.

“I wear one of these and it’s been a life changer for me, but it doesn’t suit everyone because the technology can be cumbersome, but with time, hopefully things will become smaller and more accurate in insulin delivery,” Dr. Stephens added.

The second advance of interest is the development and transplantation of cells that produce insulin.

Dr. Stephens explained that someone living with type 1 diabetes has a lot to think about, not least, doing the math related to insulin requirement. “If we just had cells from a pancreas that could be transplanted and would do that for us, then it would be a total game changer.”

To date, Vertex Pharmaceuticals has successfully treated one patient – who had lived with type 1 diabetes for about 40 years and had recurrent episodes of severe hypoglycemia – with an infusion of stem cell–derived differentiated islet cells into his liver. The procedure resulted in near reversal of type 1 diabetes, with his insulin dose reduced from 34 to 3 units, and his hemoglobin A1c falling from 8.6% to 7.2%.

And although the patient, Brian Shelton, still needs to take immunosuppressive agents to prevent rejection of the stem cell–derived islets, “it’s a whole new life,” he recently told the New York Times.  

Another company called ViaCyte is also working on a similar approach.

Whether this is a cure for type 1 diabetes is still debatable, said Anne Peters, MD, of the University of Southern California, Los Angeles. “Is it true? In a word, no. But we are part of the way there, which is much closer than we were 6 months ago.”

There are also ongoing clinical trials of therapeutic interventions to prevent or delay the trajectory from presymptomatic to clinical type 1 diabetes. The most advanced is the anti-CD3 monoclonal antibody teplizumab (Tzield, Provention Bio), which was rejected by the FDA in July 2021, but has since been refiled. The company expects to hear from the agency by the end of March 2022 as to whether the resubmission has been accepted.
 

Diabetes specialist nurses/educators keep it human

Dr. Hall said he concurs with the late eminent U.K. diabetes specialist Robert Tattersall’s observation on what he considers one of the most important advances in the management and treatment of type 1 diabetes: the human touch.

Referring to Dr. Tattersall’s book, “Diabetes: A Biography,” Dr. Hall quoted: “If asked what innovation had made the most difference to their lives in the 1980s, patients with type 1 diabetes in England would unhesitatingly have chosen not human insulin, but the spread of diabetes specialist nurses ... these people (mainly women) did more in the last two decades of the 20th century to improve the standard of diabetes care than any other innovation or drug.”

In the United States, diabetes specialist nurses were called diabetes educators until recently, when the name changed to certified diabetes care and education specialist.

“Above all, they have humanized the service and given the patient a say in the otherwise unequal relationship with all-powerful doctors,” concluded Dr. Hall, again quoting Dr. Tattersall.

A version of this article first appeared on Medscape.com.


Death of pig heart transplant patient is more a beginning than an end


The genetically altered pig’s heart “worked like a rock star, beautifully functioning,” the surgeon who performed the pioneering Jan. 7 xenotransplant procedure said in a press statement on the death of the patient, David Bennett Sr.

“He wasn’t able to overcome what turned out to be devastating – the debilitation from his previous period of heart failure, which was extreme,” said Bartley P. Griffith, MD, clinical director of the cardiac xenotransplantation program at the University of Maryland, Baltimore.

University of Maryland Medical Center
Dr. Bartley P. Griffith and David Bennett Sr.

Representatives of the institution aren’t offering many details on the cause of Mr. Bennett’s death on March 8, 60 days after his operation, but said they will elaborate when their findings are formally published. But their comments seem to downplay the unique nature of the implanted heart itself as a culprit and instead implicate the patient’s diminished overall clinical condition and what grew into an ongoing battle with infections.

The 57-year-old Mr. Bennett – bedridden with end-stage heart failure, judged a poor candidate for a ventricular assist device, and on extracorporeal membrane oxygenation (ECMO) – reportedly was offered the extraordinary surgery after being turned down for a conventional transplant at several major centers.

“Until day 45 or 50, he was doing very well,” Muhammad M. Mohiuddin, MD, the xenotransplantation program’s scientific director, observed in the statement. But infections soon took advantage of his hobbled immune system.

Given his “preexisting condition and how frail his body was,” Dr. Mohiuddin said, “we were having difficulty maintaining a balance between his immunosuppression and controlling his infection.” Mr. Bennett went into multiple organ failure and “I think that resulted in his passing away.”


 

Beyond wildest dreams

The surgeons confidently framed Mr. Bennett’s experience as a milestone for heart xenotransplantation. “The demonstration that it was possible, beyond the wildest dreams of most people in the field, even, at this point – that we were able to take a genetically engineered organ and watch it function flawlessly for 9 weeks – is pretty positive in terms of the potential of this therapy,” Dr. Griffith said.

But enough questions linger that others were more circumspect, even as they praised the accomplishment. “There’s no question that this is a historic event,” Mandeep R. Mehra, MD, of Harvard Medical School, and director of the Center for Advanced Heart Disease at Brigham and Women’s Hospital, both in Boston, said in an interview.

Dr. Mandeep R. Mehra

Still, “I don’t think we should just conclude that it was the patient’s frailty or death from infection,” Dr. Mehra said. With so few details available, “I would be very careful in prematurely concluding that the problem did not reside with the heart but with the patient. We cannot be sure.”

For example, he noted, “6 to 8 weeks is right around the time when some cardiac complications, like accelerated forms of vasculopathy, could become evident.” Immune-mediated cardiac allograft vasculopathy is a common cause of heart transplant failure.

Or, “it could as easily have been the fact that immunosuppression was modified at 6 to 7 weeks in response to potential infection, which could have led to a cardiac compromise,” Dr. Mehra said. “We just don’t know.”

“It’s really important that this be reported in a scientifically accurate way, because we will all learn from this,” Lori J. West, MD, DPhil, said in an interview.

Little seems to be known for sure about the actual cause of death, “but the fact there was not hyperacute rejection is itself a big step forward. And we know, at least from the limited information we have, that it did not occur,” observed Dr. West, who directs the Alberta Transplant Institute, Edmonton, and the Canadian Donation and Transplantation Research Program. She is a professor of pediatrics with adjunct positions in the departments of surgery and microbiology/immunology.

Dr. West also sees Mr. Bennett’s struggle with infections and adjustments to his unique immunosuppressive regimen, at least as characterized by his care team, as in line with the experience of many heart transplant recipients facing the same threat.

“We already walk this tightrope with every transplant patient,” she said. Typically, they’re put on a somewhat standardized immunosuppressant regimen, “and then we modify it a bit, either increasing or decreasing it, depending on the posttransplant course.” The regimen can become especially intense in response to new signs of rejection, “and you know that that’s going to have an impact on susceptibility to all kinds of infections.”
 

 

 

Full circle

The porcine heart was protected along two fronts against assault from Mr. Bennett’s immune system and other inhospitable aspects of his physiology, either of which could also have been obstacles to success: Genetic modification (Revivicor) of the pig that provided the heart, and a singularly aggressive antirejection drug regimen for the patient.

The knockout of three genes targeting specific porcine cell-surface carbohydrates that provoke a strong human antibody response reportedly averted a hyperacute rejection response that would have caused the graft to fail almost immediately.

Other genetic manipulations, some using CRISPR technology, silenced genes encoded for porcine endogenous retroviruses. Others were aimed at controlling myocardial growth and stemming graft microangiopathy.  

Mr. Bennett himself was treated with powerful immunosuppressants, including an investigational anti-CD40 monoclonal antibody (KPL-404, Kiniksa Pharmaceuticals) that, according to UMSOM, inhibits a well-recognized pathway critical to B-cell proliferation, T-cell activation, and antibody production.

“I suspect the patient may not have had rejection, but unfortunately, that intense immunosuppression really set him up – even if he had been half that age – for a very difficult time,” David A. Baran, MD, a cardiologist from Sentara Advanced Heart Failure Center, Norfolk, Va., who studies transplant immunology, said in an interview.

“This is in some ways like the original heart transplant in 1967, when the ability to do the surgery evolved before understanding of the immunosuppression needed. Four or 5 years later, heart transplantation almost died out, before the development of better immunosuppressants like cyclosporine and later tacrolimus,” Dr. Baran said.

“The current age, when we use less immunosuppression than ever, is based on 30 years of progressive success,” he noted. This landmark xenotransplantation “basically turns back the clock to a time when the intensity of immunosuppression by definition had to be extremely high, because we really didn’t know what to expect.”
 

Emerging role of xeno-organs

Xenotransplantation has been touted as potential strategy for expanding the pool of organs available for transplantation. Mr. Bennett’s “breakthrough surgery” takes the world “one step closer to solving the organ shortage crisis,” his surgeon, Dr. Griffith, announced soon after the procedure. “There are simply not enough donor human hearts available to meet the long list of potential recipients.”

But it’s not the only proposed approach. Measures could be taken, for example, to make more efficient use of the human organs that become available, partly by opening the field to additional less-than-ideal hearts and loosening regulatory mandates for projected graft survival.

“Every year, more than two-thirds of donor organs in the United States are discarded. So it’s not actually that we don’t have enough organs, it’s that we don’t have enough organs that people are willing to take,” Dr. Baran said. Still, it’s important to pursue all promising avenues, and “the genetic manipulation pathway is remarkable.”

But “honestly, organs such as kidneys probably make the most sense” for early study of xenotransplantation from pigs, he said. “The waiting list for kidneys is also very long, but if the kidney graft were to fail, the patient wouldn’t die. It would allow us to work out the immunosuppression without putting patients’ lives at risk.”

Often overlooked in assessments of organ demand, Dr. West said, is that “a lot of patients who could benefit from a transplant will never even be listed for a transplant.” It’s not clear why; perhaps they have multiple comorbidities, live too far from a transplant center, “or they’re too big or too small. Even if there were unlimited organs, you could never meet the needs of people who could benefit from transplantation.”

So even if more available donor organs were used, she said, there would still be a gap that xenotransplantation could help fill. “I’m very much in favor of research that allows us to continue to try to find a pathway to xenotransplantation. I think it’s critically important.”

Unquestionably, “we now need to have a dialogue to entertain how a technology like this, using modern medicine with gene editing, is really going to be utilized,” Dr. Mehra said. The Bennett case “does open up the field, but it also raises caution.” There should be broad participation to move the field forward, “coordinated through either societies or nationally allocated advisory committees that oversee the movement of this technology, to the next step.”

Ideally, that next step “would be to do a safety clinical trial in the right patient,” he said. “And the right patient, by definition, would be one who does not have a life-prolonging option, either mechanical circulatory support or allograft transplantation. That would be the goal.”

Dr. Mehra has reported receiving payments to his institution from Abbott for consulting; consulting fees from Janssen, Mesoblast, Broadview Ventures, Natera, Paragonix, Moderna, and the Baim Institute for Clinical Research; and serving on a scientific advisory board NuPulseCV, Leviticus, and FineHeart. Dr. Baran disclosed consulting for Getinge and LivaNova; speaking for Pfizer; and serving on trial steering committees for CareDx and Procyrion, all unrelated to xenotransplantation. Dr. West has declared no relevant conflicts.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

The genetically altered pig’s heart “worked like a rock star, beautifully functioning,” the surgeon who performed the pioneering Jan. 7 xenotransplant procedure said in a press statement on the death of the patient, David Bennett Sr.

“He wasn’t able to overcome what turned out to be devastating – the debilitation from his previous period of heart failure, which was extreme,” said Bartley P. Griffith, MD, clinical director of the cardiac xenotransplantation program at the University of Maryland, Baltimore.

University of Maryland Medical Center
Dr. Bartley P. Griffith and David Bennett Sr.

Representatives of the institution aren’t offering many details on the cause of Mr. Bennett’s death on March 8, 60 days after his operation, but said they will elaborate when their findings are formally published. But their comments seem to downplay the unique nature of the implanted heart itself as a culprit and instead implicate the patient’s diminished overall clinical condition and what grew into an ongoing battle with infections.

Bedridden with end-stage heart failure, judged a poor candidate for a ventricular assist device, and dependent on extracorporeal membrane oxygenation (ECMO), the 57-year-old Mr. Bennett reportedly was offered the extraordinary surgery after being turned down for a conventional transplant at several major centers.

“Until day 45 or 50, he was doing very well,” Muhammad M. Mohiuddin, MD, the xenotransplantation program’s scientific director, observed in the statement. But infections soon took advantage of his hobbled immune system.

Given his “preexisting condition and how frail his body was,” Dr. Mohiuddin said, “we were having difficulty maintaining a balance between his immunosuppression and controlling his infection.” Mr. Bennett went into multiple organ failure and “I think that resulted in his passing away.”


 

Beyond wildest dreams

The surgeons confidently framed Mr. Bennett’s experience as a milestone for heart xenotransplantation. “The demonstration that it was possible, beyond the wildest dreams of most people in the field, even, at this point – that we were able to take a genetically engineered organ and watch it function flawlessly for 9 weeks – is pretty positive in terms of the potential of this therapy,” Dr. Griffith said.

But enough questions linger that others were more circumspect, even as they praised the accomplishment. “There’s no question that this is a historic event,” Mandeep R. Mehra, MD, of Harvard Medical School, and director of the Center for Advanced Heart Disease at Brigham and Women’s Hospital, both in Boston, said in an interview.

Dr. Mandeep R. Mehra

Still, “I don’t think we should just conclude that it was the patient’s frailty or death from infection,” Dr. Mehra said. With so few details available, “I would be very careful in prematurely concluding that the problem did not reside with the heart but with the patient. We cannot be sure.”

For example, he noted, “6 to 8 weeks is right around the time when some cardiac complications, like accelerated forms of vasculopathy, could become evident.” Immune-mediated cardiac allograft vasculopathy is a common cause of heart transplant failure.

Or, “it could as easily have been the fact that immunosuppression was modified at 6 to 7 weeks in response to potential infection, which could have led to a cardiac compromise,” Dr. Mehra said. “We just don’t know.”

“It’s really important that this be reported in a scientifically accurate way, because we will all learn from this,” Lori J. West, MD, DPhil, said in an interview.

Little seems to be known for sure about the actual cause of death, “but the fact there was not hyperacute rejection is itself a big step forward. And we know, at least from the limited information we have, that it did not occur,” observed Dr. West, who directs the Alberta Transplant Institute, Edmonton, and the Canadian Donation and Transplantation Research Program. She is a professor of pediatrics with adjunct positions in the departments of surgery and microbiology/immunology.

Dr. West also sees Mr. Bennett’s struggle with infections and adjustments to his unique immunosuppressive regimen, at least as characterized by his care team, as in line with the experience of many heart transplant recipients facing the same threat.

“We already walk this tightrope with every transplant patient,” she said. Typically, they’re put on a somewhat standardized immunosuppressant regimen, “and then we modify it a bit, either increasing or decreasing it, depending on the posttransplant course.” The regimen can become especially intense in response to new signs of rejection, “and you know that that’s going to have an impact on susceptibility to all kinds of infections.”
 

 

 

Full circle

The porcine heart was protected on two fronts against assault from Mr. Bennett’s immune system and other inhospitable aspects of his physiology, either of which could also have been an obstacle to success: genetic modification (Revivicor) of the pig that provided the heart, and a singularly aggressive antirejection drug regimen for the patient.

The knockout of three genes targeting specific porcine cell-surface carbohydrates that provoke a strong human antibody response reportedly averted a hyperacute rejection response that would have caused the graft to fail almost immediately.

Other genetic manipulations, some using CRISPR technology, silenced genes encoding porcine endogenous retroviruses. Others were aimed at controlling myocardial growth and stemming graft microangiopathy.

Mr. Bennett himself was treated with powerful immunosuppressants, including an investigational anti-CD40 monoclonal antibody (KPL-404, Kiniksa Pharmaceuticals) that, according to UMSOM, inhibits a well-recognized pathway critical to B-cell proliferation, T-cell activation, and antibody production.

“I suspect the patient may not have had rejection, but unfortunately, that intense immunosuppression really set him up – even if he had been half that age – for a very difficult time,” David A. Baran, MD, a cardiologist from Sentara Advanced Heart Failure Center, Norfolk, Va., who studies transplant immunology, said in an interview.

“This is in some ways like the original heart transplant in 1967, when the ability to do the surgery evolved before understanding of the immunosuppression needed. Four or 5 years later, heart transplantation almost died out, before the development of better immunosuppressants like cyclosporine and later tacrolimus,” Dr. Baran said.

“The current age, when we use less immunosuppression than ever, is based on 30 years of progressive success,” he noted. This landmark xenotransplantation “basically turns back the clock to a time when the intensity of immunosuppression by definition had to be extremely high, because we really didn’t know what to expect.”
 

Emerging role of xeno-organs

Xenotransplantation has been touted as a potential strategy for expanding the pool of organs available for transplantation. Mr. Bennett’s “breakthrough surgery” takes the world “one step closer to solving the organ shortage crisis,” his surgeon, Dr. Griffith, announced soon after the procedure. “There are simply not enough donor human hearts available to meet the long list of potential recipients.”

But it’s not the only proposed approach. Measures could be taken, for example, to make more efficient use of the human organs that become available, partly by opening the field to additional less-than-ideal hearts and loosening regulatory mandates for projected graft survival.

“Every year, more than two-thirds of donor organs in the United States are discarded. So it’s not actually that we don’t have enough organs, it’s that we don’t have enough organs that people are willing to take,” Dr. Baran said. Still, it’s important to pursue all promising avenues, and “the genetic manipulation pathway is remarkable.”

But “honestly, organs such as kidneys probably make the most sense” for early study of xenotransplantation from pigs, he said. “The waiting list for kidneys is also very long, but if the kidney graft were to fail, the patient wouldn’t die. It would allow us to work out the immunosuppression without putting patients’ lives at risk.”

Often overlooked in assessments of organ demand, Dr. West said, is that “a lot of patients who could benefit from a transplant will never even be listed for a transplant.” It’s not clear why; perhaps they have multiple comorbidities, live too far from a transplant center, “or they’re too big or too small. Even if there were unlimited organs, you could never meet the needs of people who could benefit from transplantation.”

So even if more available donor organs were used, she said, there would still be a gap that xenotransplantation could help fill. “I’m very much in favor of research that allows us to continue to try to find a pathway to xenotransplantation. I think it’s critically important.”

Unquestionably, “we now need to have a dialogue to entertain how a technology like this, using modern medicine with gene editing, is really going to be utilized,” Dr. Mehra said. The Bennett case “does open up the field, but it also raises caution.” There should be broad participation to move the field forward, “coordinated through either societies or nationally allocated advisory committees that oversee the movement of this technology, to the next step.”

Ideally, that next step “would be to do a safety clinical trial in the right patient,” he said. “And the right patient, by definition, would be one who does not have a life-prolonging option, either mechanical circulatory support or allograft transplantation. That would be the goal.”

Dr. Mehra has reported receiving payments to his institution from Abbott for consulting; consulting fees from Janssen, Mesoblast, Broadview Ventures, Natera, Paragonix, Moderna, and the Baim Institute for Clinical Research; and serving on scientific advisory boards for NuPulseCV, Leviticus, and FineHeart. Dr. Baran disclosed consulting for Getinge and LivaNova; speaking for Pfizer; and serving on trial steering committees for CareDx and Procyrion, all unrelated to xenotransplantation. Dr. West has declared no relevant conflicts.

A version of this article first appeared on Medscape.com.

Morphology of Mycosis Fungoides and Sézary Syndrome in Skin of Color

Article Type
Changed
Thu, 03/17/2022 - 14:05
Display Headline
Morphology of Mycosis Fungoides and Sézary Syndrome in Skin of Color

Mycosis fungoides (MF) and Sézary syndrome (SS) are non-Hodgkin T-cell lymphomas that make up the majority of cutaneous T-cell lymphomas. These conditions disproportionately affect Black patients, with an incidence rate of 12.6 cases of cutaneous T-cell lymphoma per million individuals vs 9.8 per million individuals in patients without skin of color (SoC).1 However, educational resources tend to focus on the clinical manifestations of MF/SS in lighter skin types, describing MF as erythematous patches, plaques, or tumors presenting in non–sun-exposed areas of the skin and SS as generalized erythroderma.2 Skin of color, comprising Fitzpatrick skin types (FSTs) IV to VI,3 is poorly represented across dermatology textbooks,4,5 medical student resources,6 and peer-reviewed publications,7 raising awareness of the need to address this disparity.

Skin of color patients with MF/SS display variable morphologies, including features such as hyperpigmentation and hypopigmentation,8 the latter being exceedingly rare in non-SoC patients.9 Familiarity with these differences among providers is essential to allow for equitable diagnosis and treatment across all skin types, especially in light of data predicting that by 2044 more than 50% of the US population will be people of color.10 Patients with SoC are of many ethnic and racial backgrounds, including Black, Hispanic, American Indian, Pacific Islander, and Asian.11

Along with morphologic differences, there also are several racial disparities in the prognosis and survival of patients with MF/SS. Black patients diagnosed with MF present with greater body surface area affected, and Black women with MF have reduced survival rates compared to their White counterparts.12 Given these racial disparities in survival and representation in educational resources, we aimed to quantify the frequency of various morphologic characteristics of MF/SS in patients with SoC vs non-SoC patients to facilitate better recognition of early MF/SS in SoC patients by medical providers.

Methods

We performed a retrospective chart review following approval from the institutional review board at Northwestern University (Chicago, Illinois). We identified all patients with FSTs IV to VI and biopsy-proven MF/SS who had been clinically photographed in our clinic from January 1998 to December 2019. Only photographs that were high quality enough to review morphologic features were included in our review. Fitzpatrick skin type was determined based on electronic medical record documentation. If photographs were available from multiple visits for the same patient, only those showing posttreatment nonactive lesions were included. Additionally, 36 patients with FSTs I to III (non-SoC) and biopsy-proven MF/SS were included in our review as a comparison with the SoC cohort. The primary outcomes for this study included the presence of scale, erythema, hyperpigmentation, hypopigmentation, violaceous color, lichenification, silver hue, dyschromia, alopecia, poikiloderma, atrophy, and ulceration in active lesions. Dyschromia was defined by the presence of both hypopigmentation and hyperpigmentation. Poikiloderma was defined by hypopigmentation and hyperpigmentation, telangiectasia, and atrophy. Secondary outcomes included evaluation of those same characteristics in posttreatment nonactive lesions. All photographs were independently assessed by 3 authors (M.L.E., C.J.W., J.M.M.), and discrepancies were resolved by further review of the photograph in question and discussion.

Statistical Analysis—Summary statistics were applied to describe demographic and clinical characteristics. The χ2 test was used for categorical variables. Results achieving P<.05 were considered statistically significant.
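The χ2 test of independence described above can be illustrated with a minimal pure-Python sketch. The 2×2 counts below are an assumption: they are reconstructed from the reported group sizes (75 SoC, 36 non-SoC) and the erythema percentages (69% and 94%), not taken from the authors' raw data, and no continuity correction is applied.

```python
# Illustrative Pearson chi-square test of independence on a 2x2 contingency
# table, as used in the Statistical Analysis. Counts are reconstructed from
# the reported percentages (assumption): erythema in ~52/75 SoC patients and
# ~34/36 non-SoC patients.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no Yates correction) for the table [[a, b], [c, d]].
    Returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    # For 1 degree of freedom, the chi-square survival function reduces to
    # erfc(sqrt(x / 2)), so no stats library is needed.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Erythema present/absent in SoC (n=75) vs non-SoC (n=36), reconstructed counts
chi2, p = chi2_2x2(52, 23, 34, 2)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p falls below the .05 threshold
```

Under these assumed counts the uncorrected test lands near the reported P=.003 for erythema; the authors' exact counts and software may differ.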

Results

TABLE. Patient Demographics

We reviewed photographs of 111 patients across all skin types (FST I, n=8; FST II, n=12; FST III, n=16; FST IV, n=17; FST V, n=44; FST VI, n=14). The cohort was 47% female, and the mean age was 49.7 years (range, 15–86 years). The majority of the cohort had early-stage MF (stage IA or IB). There were more cases of SS in the SoC cohort than in the non-SoC cohort (Table). Only 5 photographs had discrepancies requiring discussion among the reviewers to achieve consensus.

FIGURE 1. Frequency of morphologic features found in skin of color (SoC [Fitzpatrick skin types IV–VI]) vs non-SoC (Fitzpatrick skin types I–III) patients with mycosis fungoides/Sézary syndrome. Asterisk indicates statistically significant findings (P<.05).

Regarding morphologic characteristics in active lesions (Figure 1), scale was present in almost all patients (99% in SoC, 94% in non-SoC). Erythema was present in nearly all non-SoC patients (94%) but in only 69% of SoC patients (P=.003). Poikiloderma also was more frequent in non-SoC than in SoC patients (19% vs 4%; P=.008). Conversely, hyperpigmentation (80% vs 39%), lichenification (43% vs 17%), and silver hue (25% vs 3%) were more common in SoC patients than in non-SoC patients (P<.05). There were no significant differences between SoC and non-SoC patients in the remaining features: hypopigmentation (39% vs 25%), dyschromia (24% vs 19%), violaceous color (44% vs 25%), atrophy (11% vs 22%), alopecia (23% vs 31%), and ulceration (16% vs 8%)(P>.05). Photographs of MF in patients with SoC are shown in Figure 2.

FIGURE 2. Representative photographs of mycosis fungoides (MF) in skin of color (Fitzpatrick skin types [FSTs] IV–VI). A, A female with FST IV and MF (stage IA) who presented with hypopigmented and hyperpigmented (dyschromic) erythematous patches with poikiloderma and overlying scale on the chest and neck. B, A female with FST V and MF (stage IB) who presented with erythematous to violaceous lichenified plaques with overlying scale along the back and buttocks. C, A female with FST V and MF (stage IB) who presented with hyperpigmented, violaceous, and lichenified patches and plaques with an overlying silver hue and scale diffusely distributed along the back and buttocks. D, A female with FST V and MF (stage IB) who presented with hypopigmented scaly patches on the abdomen. E, A male with FST VI and MF (stage IIB) who presented with hyperpigmented and violaceous lichenified patches, plaques, and tumors with an overlying silver hue and scale on the thighs.

Posttreatment (nonactive) photographs were available for 26 patients (6 non-SoC, 20 SoC). We found that across all FST groups, hyperpigmentation was more common than hypopigmentation in areas of previously active disease. Statistical analysis was not completed given that few non-SoC photographs were available for comparison.

Comment

This qualitative review demonstrates the heterogeneity of MF/SS in SoC patients and shows that these conditions often do not present in this population with the classic erythematous patches and plaques seen in non-SoC patients. We found that hyperpigmentation, lichenification, and silver hue were present at higher rates in patients with FSTs IV to VI, whereas patients with FSTs I to III had higher rates of erythema and poikiloderma. Familiarity with these morphologic features, along with increased exposure to clinical photographs of MF/SS in SoC patients, will aid in the visual recognition required for this diagnosis, since erythema is harder to identify in darker skin types. Recognizing the unique findings of MF in patients with SoC as well as in patients with lighter skin types will enable earlier diagnosis and treatment of MF/SS across all skin types. If MF is diagnosed and treated early, life expectancy is similar to that of patients without MF.13 However, the 5-year survival rate for advanced-stage MF/SS is 52% across all skin types, and studies have found that Black patients with advanced-stage disease have worse outcomes even after accounting for demographic factors and tumor stage.14,15 Given the worse outcomes in SoC patients with advanced-stage MF/SS, earlier diagnosis could help address this disparity.8,13,14 Similar attention to morphology could aid in diagnosing other inflammatory conditions; studies have shown that lack of recognition of erythema in Black children has led to delayed diagnosis of atopic dermatitis and subsequent inadequate treatment.16,17

The morphologic presentation of MF/SS in SoC patients also can influence an optimal treatment plan for this population. Hypopigmented MF responds better to phototherapy than hyperpigmented MF, as phototherapy has been shown to have decreased efficacy with increasing FST.18 Therefore, for patients with FSTs IV to VI, topical agents such as nitrogen mustard or bexarotene may be more suitable treatment options, as the efficacy of these treatments is independent of skin color.8 However, nitrogen mustard commonly leads to postinflammatory hyperpigmentation, and topical bexarotene may lead to erythema or irritation; therefore, providers must counsel patients on these possible side effects. For refractory disease, adjunct systemic treatments such as oral bexarotene, subcutaneous interferon, methotrexate, or radiation therapy may be considered.8

In addition to aiding in the prompt diagnosis and treatment of MF/SS in SoC patients, our findings may be used to better assess the extent of disease and to distinguish active MF/SS lesions from xerosis cutis or residual dyschromia from previously treated lesions. It is important to note that these morphologic features must be considered alongside a complete history and workup. The differential diagnosis of MF/SS includes conditions such as atopic dermatitis, psoriasis, tinea corporis, and drug reactions, which may have similar morphology in SoC.19

Limitations of our study include the single-center design and the use of photographs instead of in-person examination; however, our cutaneous lymphoma clinic serves a diverse patient population, and our 3 reviewers rated the photographs independently. Discussion among the reviewers to resolve discrepancies was required for only 5 photographs, suggesting high interreviewer reliability. Additionally, the original purpose of FST was to assess the propensity of the skin to burn with phototherapy, not to serve as a marker for skin color. We recommend that trainees and clinicians be mindful of the purpose of FST and use inclusive language (eg, the terms skin irritation, skin tenderness, or skin becoming darker from the sun instead of tanning) when determining FST in darker-skinned individuals.20 Future directions include examining whether certain treatments are associated with prolonged dyschromia.

Conclusion

In our single-institution retrospective study, we found differences in the morphologic presentation of MF/SS in SoC vs non-SoC patients. While erythema is a common feature in non-SoC patients, the clinical features of hyperpigmentation, lichenification, and silver hue should be carefully evaluated in the diagnosis of MF/SS in SoC patients. Knowledge of the heterogeneous presentation of MF/SS in patients with SoC allows for expedited diagnosis and treatment, leading to better clinical outcomes. Valuable resources, including Taylor and Kelly's Dermatology for Skin of Color, the Skin of Color Society, and VisualDx, educate providers on how dermatologic conditions present in darker skin types. However, there is still work to be done to enhance diversity in educational resources in order to provide equitable care to patients of all skin types.

References
  1. Korgavkar K, Xiong M, Weinstock M. Changing incidence trends of cutaneous T-cell lymphoma. JAMA Dermatol. 2013;149:1295-1299. doi:10.1001/jamadermatol.2013.5526
  2. Jawed SI, Myskowski PL, Horwitz S, et al. Primary cutaneous T-cell lymphoma (mycosis fungoides and Sézary syndrome): part I. diagnosis: clinical and histopathologic features and new molecular and biologic markers. J Am Acad Dermatol. 2014;70:205.E1-E16; quiz 221-222. doi:10.1016/j.jaad.2013.07.049
  3. Tull RZ, Kerby E, Subash JJ, et al. Ethnic skin centers in the United States: where are we in 2020? J Am Acad Dermatol. 2020;83:1757-1759. doi:10.1016/j.jaad.2020.03.054
  4. Adelekun A, Onyekaba G, Lipoff JB. Skin color in dermatology textbooks: an updated evaluation and analysis. J Am Acad Dermatol. 2021;84:194-196. doi:10.1016/j.jaad.2020.04.084
  5. Ebede T, Papier A. Disparities in dermatology educational resources. J Am Acad Dermatol. 2006;55:687-690. doi:10.1016/j.jaad.2005.10.068
  6. Jones VA, Clark KA, Shobajo MT, et al. Skin of color representation in medical education: an analysis of popular preparatory materials used for United States medical licensing examinations. J Am Acad Dermatol. 2021;85:773-775. doi:10.1016/j.jaad.2020.07.112
  7. Montgomery SN, Elbuluk N. A quantitative analysis of research publications focused on the top chief complaints in skin of color patients. J Am Acad Dermatol. 2021;85:241-242. doi:10.1016/j.jaad.2020.08.031
  8. Hinds GA, Heald P. Cutaneous T-cell lymphoma in skin of color. J Am Acad Dermatol. 2009;60:359-375; quiz 376-378. doi:10.1016/j.jaad.2008.10.031
  9. Ardigó M, Borroni G, Muscardin L, et al. Hypopigmented mycosis fungoides in Caucasian patients: a clinicopathologic study of 7 cases. J Am Acad Dermatol. 2003;49:264-270. doi:10.1067/s0190-9622(03)00907-1
  10. Colby SL, Ortman JM. Projections of the size and composition of the U.S. population: 2014 to 2060. United States Census Bureau website. Updated October 8, 2021. Accessed February 28, 2022. https://www.census.gov/library/publications/2015/demo/p25-1143.html
  11. Taylor SC, Kyei A. Defining skin of color. In: Kelly AP, Taylor SC, Lim HW, et al, eds. Taylor and Kelly’s Dermatology for Skin of Color. 2nd ed. McGraw-Hill Education; 2016.
  12. Huang AH, Kwatra SG, Khanna R, et al. Racial disparities in the clinical presentation and prognosis of patients with mycosis fungoides. J Natl Med Assoc. 2019;111:633-639. doi:10.1016/j.jnma.2019.08.006
  13. Kim YH, Jensen RA, Watanabe GL, et al. Clinical stage IA (limited patch and plaque) mycosis fungoides: a long-term outcome analysis. Arch Dermatol. 1996;132:1309-1313.
  14. Scarisbrick JJ, Prince HM, Vermeer MH, et al. Cutaneous lymphoma international consortium study of outcome in advanced stages of mycosis fungoides and Sézary syndrome: effect of specific prognostic markers on survival and development of a prognostic model. J Clin Oncol. 2015;33:3766-3773. doi:10.1200/JCO.2015.61.7142
  15. Nath SK, Yu JB, Wilson LD. Poorer prognosis of African-American patients with mycosis fungoides: an analysis of the SEER dataset, 1988 to 2008. Clin Lymphoma Myeloma Leuk. 2014;14:419-423. doi:10.1016/j.clml.2013.12.018
  16. Ben-Gashir MA, Hay RJ. Reliance on erythema scores may mask severe atopic dermatitis in black children compared with their white counterparts. Br J Dermatol. 2002;147:920-925. doi:10.1046/j.1365-2133.2002.04965.x
  17. Poladian K, De Souza B, McMichael AJ. Atopic dermatitis in adolescents with skin of color. Cutis. 2019;104:164-168.
  18. Yones SS, Palmer RA, Garibaldinos TT, et al. Randomized double-blind trial of the treatment of chronic plaque psoriasis: efficacy of psoralen-UV-A therapy vs narrowband UV-B therapy. Arch Dermatol. 2006;142:836-842. doi:10.1001/archderm.142.7.836
  19. Currimbhoy S, Pandya AG. Cutaneous T-cell lymphoma. In: Kelly AP, Taylor SC, Lim HW, et al, eds. Taylor and Kelly’s Dermatology for Skin of Color. 2nd ed. McGraw-Hill Education; 2016.
  20. Ware OR, Dawson JE, Shinohara MM, et al. Racial limitations of Fitzpatrick skin type. Cutis. 2020;105:77-80.
Author and Disclosure Information

From the Department of Dermatology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois.

The authors report no conflict of interest.

Correspondence: Maria L. Espinosa, MD, 924 E 57th St, Ste 104, Chicago, IL 60637 ([email protected]).

Issue: Cutis. 109(3):E3-E7.

Mycosis fungoides (MF) and Sézary syndrome (SS) are non-Hodgkin T-cell lymphomas that make up the majority of cutaneous T-cell lymphomas. These conditions commonly affect Black patients, with an incidence rate of 12.6 cases of cutaneous T-cell lymphomas per million individuals vs 9.8 per million individuals in non–skin of color (SoC) patients.1 However, educational resources tend to focus on the clinical manifestations of MF/SS in lighter skin types, describing MF as erythematous patches, plaques, or tumors presenting in non–sun-exposed areas of the skin and SS as generalized erythroderma.2 Skin of color, comprised of Fitzpatrick skin types (FSTs) IV to VI,3 is poorly represented across dermatology textbooks,4,5 medical student resources,6 and peer-reviewed publications,7 raising awareness for the need to address this disparity.

Skin of color patients with MF/SS display variable morphologies, including features such as hyperpigmentation and hypopigmentation,8 the latter being exceedingly rare in non-SoC patients.9 Familiarity with these differences among providers is essential to allow for equitable diagnosis and treatment across all skin types, especially in light of data predicting that by 2044 more than 50% of the US population will be people of color.10 Patients with SoC are of many ethnic and racial backgrounds, including Black, Hispanic, American Indian, Pacific Islander, and Asian.11

Along with morphologic differences, there also are several racial disparities in the prognosis and survival of patients with MF/SS. Black patients diagnosed with MF present with greater body surface area affected, and Black women with MF have reduced survival rates compared to their White counterparts.12 Given these racial disparities in survival and representation in educational resources, we aimed to quantify the frequency of various morphologic characteristics of MF/SS in patients with SoC vs non-SoC patients to facilitate better recognition of early MF/SS in SoC patients by medical providers.

Methods

We performed a retrospective chart review following approval from the institutional review board at Northwestern University (Chicago, Illinois). We identified all patients with FSTs IV to VI and biopsy-proven MF/SS who had been clinically photographed in our clinic from January 1998 to December 2019. Only photographs that were high quality enough to review morphologic features were included in our review. Fitzpatrick skin type was determined based on electronic medical record documentation. If photographs were available from multiple visits for the same patient, only those showing posttreatment nonactive lesions were included. Additionally, 36 patients with FSTs I to III (non-SoC) and biopsy-proven MF/SS were included in our review as a comparison with the SoC cohort. The primary outcomes for this study included the presence of scale, erythema, hyperpigmentation, hypopigmentation, violaceous color, lichenification, silver hue, dyschromia, alopecia, poikiloderma, atrophy, and ulceration in active lesions. Dyschromia was defined by the presence of both hypopigmentation and hyperpigmentation. Poikiloderma was defined by hypopigmentation and hyperpigmentation, telangiectasia, and atrophy. Secondary outcomes included evaluation of those same characteristics in posttreatment nonactive lesions. All photographs were independently assessed by 3 authors (M.L.E., C.J.W., J.M.M.), and discrepancies were resolved by further review of the photograph in question and discussion.

Statistical Analysis—Summary statistics were applied to describe demographic and clinical characteristics. The χ2 test was used for categorical variables. Results achieving P<.05 were considered statistically significant.

Patient Demographics

Results

We reviewed photographs of 111 patients across all skin types (8, FST I; 12, FST II; 16, FST III; 17, FST IV; 44, FST V; 14, FST VI). The cohort was 47% female, and the mean age was 49.7 years (range, 15–86 years). The majority of the cohort had early-stage MF (stage IA or IB). There were more cases of SS in the SoC cohort than the non-SoC cohort (Table). Only 5 photographs had discrepancies and required discussion among the reviewers to achieve consensus.

Frequency of morphologic features found in skin of color (SoC [Fitzpatrick skin types IV–VI]) vs non-SoC (Fitzpatrick skin types I–III) patients with mycosis fungoides/Sézary syndrome
FIGURE 1. Frequency of morphologic features found in skin of color (SoC [Fitzpatrick skin types IV–VI]) vs non-SoC (Fitzpatrick skin types I–III) patients with mycosis fungoides/Sézary syndrome. Asterisk indicates statistically significant findings (P<.05).

Regarding morphologic characteristics in active lesions (Figure 1), scale was present in almost all patients (99% in SoC, 94% in non-SoC). Erythema was present in nearly all non-SoC patients (94%) but only in 69% of SoC patients (P=.003). Poikiloderma also was found to be present at higher frequencies in non-SoC patients compared with SoC patients (19% and 4%, respectively [P=.008]). However, hyperpigmentation (80% vs 39%), lichenification (43% vs 17%), and silver hue (25% vs 3%) were more common in SoC patients than non-SoC patients (P<.05). There were no significant differences in the remaining features, including hypopigmentation (39% vs 25%), dyschromia (24% vs 19%), violaceous color (44% vs 25%), atrophy (11% vs 22%), alopecia (23% vs 31%), and ulceration (16% vs 8%) between SoC and non-SoC patients (P>.05). Photographs of MF in patients with SoC can be seen in Figure 2.

Representative photographs of mycosis fungoides (MF) in skin of color (Fitzpatrick skin types [FSTs] IV–VI)
FIGURE 2. Representative photographs of mycosis fungoides (MF) in skin of color (Fitzpatrick skin types [FSTs] IV–VI). A, A female with FST IV and MF (stage IA) who presented with hypopigmented and hyperpigmented (dyschromic) erythematous patches with poikiloderma and overlying scale on the chest and neck. B, A female with FST V and MF (stage IB) who presented with erythematous to violaceous lichenified plaques with overlying scale along the back and buttocks. C, A female with FST V and MF (stage IB) who presented with hyperpigmented, violaceous, and lichenified patches and plaques with an overlying silver hue and scale diffusely distributed along the back and buttocks. D, A female with FST V and MF (stage IB) who presented with hypopigmented scaly patches on the abdomen. E, A male with FST VI and MF (stage IIB) who presented with hyperpigmented and violaceous lichenified patches, plaques, and tumors with an overlying silver hue and scale on the thighs.

 

 

Posttreatment (nonactive) photographs were available for 26 patients (6 non-SoC, 20 SoC). We found that across all FST groups, hyperpigmentation was more common than hypopigmentation in areas of previously active disease. Statistical analysis was not completed given that few non-SoC photographs were available for comparison.

Comment

This qualitative review demonstrates the heterogeneity of MF/SS in SoC patients and that these conditions do not present in this population with the classic erythematous patches and plaques found in non-SoC patients. We found that hyperpigmentation, lichenification, and silver hue were present at higher rates in patients with FSTs IV to VI compared to those with FSTs I to III, which had higher rates of erythema and poikiloderma. Familiarity with these morphologic features along with increased exposure to clinical photographs of MF/SS in SoC patients will aid in the visual recognition required for this diagnosis, since erythema is harder to identify in darker skin types. Recognizing the unique findings of MF in patients with SoC as well as in patients with lighter skin types will enable earlier diagnosis and treatment of MF/SS across all skin types. If MF is diagnosed and treated early, life expectancy is similar to that of patients without MF.13 However, the 5-year survival rate for advanced-stage MF/SS is 52% across all skin types, and studies have found that Black patients with advanced-stage disease have worse outcomes despite accounting for demographic factors and tumor stage.14,15 Given the worse outcomes in SoC patients with advanced-stage MF/SS, earlier diagnosis could help address this disparity.8,13,14 Similar morphologic features could be used in diagnosing other inflammatory conditions; studies have shown that the lack of recognition of erythema in Black children has led to delayed diagnosis of atopic dermatitis and subsequent inadequate treatment.16,17

The morphologic presentation of MF/SS in SoC patients also can influence an optimal treatment plan for this population. Hypopigmented MF responds better to phototherapy than hyperpigmented MF, as phototherapy has been shown to have decreased efficacy with increasing FST.18 Therefore, for patients with FSTs IV to VI, topical agents such as nitrogen mustard or bexarotene may be more suitable treatment options, as the efficacy of these treatments is independent of skin color.8 However, nitrogen mustard commonly leads to postinflammatory hyperpigmentation, and topical bexarotene may lead to erythema or irritation; therefore, providers must counsel patients on these possible side effects. For refractory disease, adjunct systemic treatments such as oral bexarotene, subcutaneous interferon, methotrexate, or radiation therapy may be considered.8

In addition to aiding in the prompt diagnosis and treatment of MF/SS in SoC patients, our findings may be used to better assess the extent of disease and distinguish between active MF/SS lesions vs xerosis cutis or residual dyschromia from previously treated lesions. It is important to note that these morphologic features must be taken into account with a complete history and work-up. The differential diagnosis of MF/SS includes conditions such as atopic dermatitis, psoriasis, tinea corporis, and drug reactions, which may have similar morphology in SoC.19

Limitations of our study include the single-center design and the use of photographs instead of in-person examination; however, our cutaneous lymphoma clinic serves a diverse patient population, and our 3 reviewers rated the photographs independently. Discussion amongst the reviewers to address discrepancies was only required for 5 photographs, indicating the high inter-reviewer reliability. Additionally, the original purpose of FST was to assess for the propensity of the skin to burn when undergoing phototherapy, not to serve as a marker for skin color. We recommend trainees and clinicians be mindful about the purpose of FST and to use inclusive language (eg, using the terms skin irritation, skin tenderness, or skin becoming darker from the sun instead of tanning) when determining FST in darker-skinned individuals.20 Future directions include examining if certain treatments are associated with prolonged dyschromia.

Conclusion

In our single-institution retrospective study, we found differences in the morphologic presentation of MF/SS in SoC patients vs non-SoC patients. While erythema is a common feature in non-SoC patients, clinical features of hyperpigmentation, lichenification, and silver hue should be carefully evaluated in the diagnosis of MF/SS in SoC patients. Knowledge of the heterogenous presentation of MF/SS in patients with SoC allows for expedited diagnosis and treatment, leading to better clinical outcomes. Valuable resources, including Taylor and Kelly’s Dermatology for Skin of Color, the Skin of Color Society, and VisualDx educate providers on how dermatologic conditions present in darker skin types. However, there is still work to be done to enhance diversity in educational resources in order to provide equitable care to patients of all skin types.

Mycosis fungoides (MF) and Sézary syndrome (SS) are non-Hodgkin T-cell lymphomas that make up the majority of cutaneous T-cell lymphomas. These conditions commonly affect Black patients, with an incidence rate of 12.6 cases of cutaneous T-cell lymphomas per million individuals vs 9.8 per million individuals in non–skin of color (SoC) patients.1 However, educational resources tend to focus on the clinical manifestations of MF/SS in lighter skin types, describing MF as erythematous patches, plaques, or tumors presenting in non–sun-exposed areas of the skin and SS as generalized erythroderma.2 Skin of color, comprised of Fitzpatrick skin types (FSTs) IV to VI,3 is poorly represented across dermatology textbooks,4,5 medical student resources,6 and peer-reviewed publications,7 raising awareness for the need to address this disparity.

Skin of color patients with MF/SS display variable morphologies, including features such as hyperpigmentation and hypopigmentation,8 the latter being exceedingly rare in non-SoC patients.9 Familiarity with these differences among providers is essential to allow for equitable diagnosis and treatment across all skin types, especially in light of data predicting that by 2044 more than 50% of the US population will be people of color.10 Patients with SoC are of many ethnic and racial backgrounds, including Black, Hispanic, American Indian, Pacific Islander, and Asian.11

Along with morphologic differences, there also are several racial disparities in the prognosis and survival of patients with MF/SS. Black patients diagnosed with MF present with greater body surface area affected, and Black women with MF have reduced survival rates compared to their White counterparts.12 Given these racial disparities in survival and representation in educational resources, we aimed to quantify the frequency of various morphologic characteristics of MF/SS in patients with SoC vs non-SoC patients to facilitate better recognition of early MF/SS in SoC patients by medical providers.

Methods

We performed a retrospective chart review following approval from the institutional review board at Northwestern University (Chicago, Illinois). We identified all patients with FSTs IV to VI and biopsy-proven MF/SS who had been clinically photographed in our clinic from January 1998 to December 2019. Only photographs that were high quality enough to review morphologic features were included in our review. Fitzpatrick skin type was determined based on electronic medical record documentation. If photographs were available from multiple visits for the same patient, only those showing posttreatment nonactive lesions were included. Additionally, 36 patients with FSTs I to III (non-SoC) and biopsy-proven MF/SS were included in our review as a comparison with the SoC cohort. The primary outcomes for this study included the presence of scale, erythema, hyperpigmentation, hypopigmentation, violaceous color, lichenification, silver hue, dyschromia, alopecia, poikiloderma, atrophy, and ulceration in active lesions. Dyschromia was defined by the presence of both hypopigmentation and hyperpigmentation. Poikiloderma was defined by hypopigmentation and hyperpigmentation, telangiectasia, and atrophy. Secondary outcomes included evaluation of those same characteristics in posttreatment nonactive lesions. All photographs were independently assessed by 3 authors (M.L.E., C.J.W., J.M.M.), and discrepancies were resolved by further review of the photograph in question and discussion.

Statistical Analysis—Summary statistics were applied to describe demographic and clinical characteristics. The χ2 test was used for categorical variables. Results achieving P<.05 were considered statistically significant.

Patient Demographics

Results

We reviewed photographs of 111 patients across all skin types (8, FST I; 12, FST II; 16, FST III; 17, FST IV; 44, FST V; 14, FST VI). The cohort was 47% female, and the mean age was 49.7 years (range, 15–86 years). The majority of the cohort had early-stage MF (stage IA or IB). There were more cases of SS in the SoC cohort than the non-SoC cohort (Table). Only 5 photographs had discrepancies and required discussion among the reviewers to achieve consensus.

Frequency of morphologic features found in skin of color (SoC [Fitzpatrick skin types IV–VI]) vs non-SoC (Fitzpatrick skin types I–III) patients with mycosis fungoides/Sézary syndrome
FIGURE 1. Frequency of morphologic features found in skin of color (SoC [Fitzpatrick skin types IV–VI]) vs non-SoC (Fitzpatrick skin types I–III) patients with mycosis fungoides/Sézary syndrome. Asterisk indicates statistically significant findings (P<.05).

Regarding morphologic characteristics in active lesions (Figure 1), scale was present in almost all patients (99% in SoC, 94% in non-SoC). Erythema was present in nearly all non-SoC patients (94%) but only in 69% of SoC patients (P=.003). Poikiloderma also was found to be present at higher frequencies in non-SoC patients compared with SoC patients (19% and 4%, respectively [P=.008]). However, hyperpigmentation (80% vs 39%), lichenification (43% vs 17%), and silver hue (25% vs 3%) were more common in SoC patients than non-SoC patients (P<.05). There were no significant differences in the remaining features, including hypopigmentation (39% vs 25%), dyschromia (24% vs 19%), violaceous color (44% vs 25%), atrophy (11% vs 22%), alopecia (23% vs 31%), and ulceration (16% vs 8%) between SoC and non-SoC patients (P>.05). Photographs of MF in patients with SoC can be seen in Figure 2.

Representative photographs of mycosis fungoides (MF) in skin of color (Fitzpatrick skin types [FSTs] IV–VI)
FIGURE 2. Representative photographs of mycosis fungoides (MF) in skin of color (Fitzpatrick skin types [FSTs] IV–VI). A, A female with FST IV and MF (stage IA) who presented with hypopigmented and hyperpigmented (dyschromic) erythematous patches with poikiloderma and overlying scale on the chest and neck. B, A female with FST V and MF (stage IB) who presented with erythematous to violaceous lichenified plaques with overlying scale along the back and buttocks. C, A female with FST V and MF (stage IB) who presented with hyperpigmented, violaceous, and lichenified patches and plaques with an overlying silver hue and scale diffusely distributed along the back and buttocks. D, A female with FST V and MF (stage IB) who presented with hypopigmented scaly patches on the abdomen. E, A male with FST VI and MF (stage IIB) who presented with hyperpigmented and violaceous lichenified patches, plaques, and tumors with an overlying silver hue and scale on the thighs.

 

 

Posttreatment (nonactive) photographs were available for 26 patients (6 non-SoC, 20 SoC). We found that across all FST groups, hyperpigmentation was more common than hypopigmentation in areas of previously active disease. Statistical analysis was not completed given that few non-SoC photographs were available for comparison.

Comment

This qualitative review demonstrates the heterogeneity of MF/SS in SoC patients, who often do not present with the classic erythematous patches and plaques seen in non-SoC patients. We found that hyperpigmentation, lichenification, and silver hue were present at higher rates in patients with FSTs IV to VI, whereas erythema and poikiloderma were more common in patients with FSTs I to III. Familiarity with these morphologic features, along with increased exposure to clinical photographs of MF/SS in SoC patients, will aid in the visual recognition required for this diagnosis, as erythema is harder to identify in darker skin types. Recognizing the unique findings of MF in patients with SoC as well as in patients with lighter skin types will enable earlier diagnosis and treatment of MF/SS across all skin types. If MF is diagnosed and treated early, life expectancy is similar to that of patients without MF.13 However, the 5-year survival rate for advanced-stage MF/SS is 52% across all skin types, and studies have found that Black patients with advanced-stage disease have worse outcomes even after accounting for demographic factors and tumor stage.14,15 Given these worse outcomes, earlier diagnosis in SoC patients could help address this disparity.8,13,14 Similar morphologic features could be used in diagnosing other inflammatory conditions; studies have shown that lack of recognition of erythema in Black children has led to delayed diagnosis of atopic dermatitis and subsequent inadequate treatment.16,17

The morphologic presentation of MF/SS in SoC patients also can influence an optimal treatment plan for this population. Hypopigmented MF responds better to phototherapy than hyperpigmented MF, as phototherapy has been shown to have decreased efficacy with increasing FST.18 Therefore, for patients with FSTs IV to VI, topical agents such as nitrogen mustard or bexarotene may be more suitable treatment options, as the efficacy of these treatments is independent of skin color.8 However, nitrogen mustard commonly leads to postinflammatory hyperpigmentation, and topical bexarotene may lead to erythema or irritation; therefore, providers must counsel patients on these possible side effects. For refractory disease, adjunct systemic treatments such as oral bexarotene, subcutaneous interferon, methotrexate, or radiation therapy may be considered.8

In addition to aiding in the prompt diagnosis and treatment of MF/SS in SoC patients, our findings may be used to better assess the extent of disease and distinguish between active MF/SS lesions vs xerosis cutis or residual dyschromia from previously treated lesions. It is important to note that these morphologic features must be taken into account with a complete history and work-up. The differential diagnosis of MF/SS includes conditions such as atopic dermatitis, psoriasis, tinea corporis, and drug reactions, which may have similar morphology in SoC.19

Limitations of our study include the single-center design and the use of photographs instead of in-person examination; however, our cutaneous lymphoma clinic serves a diverse patient population, and our 3 reviewers rated the photographs independently. Discussion among the reviewers to address discrepancies was required for only 5 photographs, indicating high interreviewer reliability. Additionally, FST originally was developed to assess the propensity of the skin to burn during phototherapy, not to serve as a marker for skin color. We recommend that trainees and clinicians be mindful of the purpose of FST and use inclusive language (eg, the terms skin irritation, skin tenderness, or skin becoming darker from the sun instead of tanning) when determining FST in darker-skinned individuals.20 Future directions include examining whether certain treatments are associated with prolonged dyschromia.

Conclusion

In our single-institution retrospective study, we found differences in the morphologic presentation of MF/SS between SoC and non-SoC patients. While erythema is a common feature in non-SoC patients, the clinical features of hyperpigmentation, lichenification, and silver hue should be carefully evaluated when diagnosing MF/SS in SoC patients. Knowledge of the heterogeneous presentation of MF/SS in patients with SoC allows for expedited diagnosis and treatment, leading to better clinical outcomes. Valuable resources, including Taylor and Kelly’s Dermatology for Skin of Color, the Skin of Color Society, and VisualDx, educate providers on how dermatologic conditions present in darker skin types. However, there is still work to be done to enhance the diversity of educational resources in order to provide equitable care to patients of all skin types.

References
  1. Korgavkar K, Xiong M, Weinstock M. Changing incidence trends of cutaneous T-cell lymphoma. JAMA Dermatol. 2013;149:1295-1299. doi:10.1001/jamadermatol.2013.5526
  2. Jawed SI, Myskowski PL, Horwitz S, et al. Primary cutaneous T-cell lymphoma (mycosis fungoides and Sézary syndrome): part I. diagnosis: clinical and histopathologic features and new molecular and biologic markers. J Am Acad Dermatol. 2014;70:205.E1-E16; quiz 221-222. doi:10.1016/j.jaad.2013.07.049
  3. Tull RZ, Kerby E, Subash JJ, et al. Ethnic skin centers in the United States: where are we in 2020? J Am Acad Dermatol. 2020;83:1757-1759. doi:10.1016/j.jaad.2020.03.054
  4. Adelekun A, Onyekaba G, Lipoff JB. Skin color in dermatology textbooks: an updated evaluation and analysis. J Am Acad Dermatol. 2021;84:194-196. doi:10.1016/j.jaad.2020.04.084
  5. Ebede T, Papier A. Disparities in dermatology educational resources. J Am Acad Dermatol. 2006;55:687-690. doi:10.1016/j.jaad.2005.10.068
  6. Jones VA, Clark KA, Shobajo MT, et al. Skin of color representation in medical education: an analysis of popular preparatory materials used for United States medical licensing examinations. J Am Acad Dermatol. 2021;85:773-775. doi:10.1016/j.jaad.2020.07.112
  7. Montgomery SN, Elbuluk N. A quantitative analysis of research publications focused on the top chief complaints in skin of color patients. J Am Acad Dermatol. 2021;85:241-242. doi:10.1016/j.jaad.2020.08.031
  8. Hinds GA, Heald P. Cutaneous T-cell lymphoma in skin of color. J Am Acad Dermatol. 2009;60:359-375; quiz 376-378. doi:10.1016/j.jaad.2008.10.031
  9. Ardigó M, Borroni G, Muscardin L, et al. Hypopigmented mycosis fungoides in Caucasian patients: a clinicopathologic study of 7 cases. J Am Acad Dermatol. 2003;49:264-270. doi:10.1067/s0190-9622(03)00907-1
  10. Colby SL, Ortman JM. Projections of the size and composition of the U.S. population: 2014 to 2060. United States Census Bureau website. Updated October 8, 2021. Accessed February 28, 2022. https://www.census.gov/library/publications/2015/demo/p25-1143.html
  11. Taylor SC, Kyei A. Defining skin of color. In: Kelly AP, Taylor SC, Lim HW, et al, eds. Taylor and Kelly’s Dermatology for Skin of Color. 2nd ed. McGraw-Hill Education; 2016.
  12. Huang AH, Kwatra SG, Khanna R, et al. Racial disparities in the clinical presentation and prognosis of patients with mycosis fungoides. J Natl Med Assoc. 2019;111:633-639. doi:10.1016/j.jnma.2019.08.006
  13. Kim YH, Jensen RA, Watanabe GL, et al. Clinical stage IA (limited patch and plaque) mycosis fungoides: a long-term outcome analysis. Arch Dermatol. 1996;132:1309-1313.
  14. Scarisbrick JJ, Prince HM, Vermeer MH, et al. Cutaneous lymphoma international consortium study of outcome in advanced stages of mycosis fungoides and Sézary syndrome: effect of specific prognostic markers on survival and development of a prognostic model. J Clin Oncol. 2015;33:3766-3773. doi:10.1200/JCO.2015.61.7142
  15. Nath SK, Yu JB, Wilson LD. Poorer prognosis of African-American patients with mycosis fungoides: an analysis of the SEER dataset, 1988 to 2008. Clin Lymphoma Myeloma Leuk. 2014;14:419-423. doi:10.1016/j.clml.2013.12.018
  16. Ben-Gashir MA, Hay RJ. Reliance on erythema scores may mask severe atopic dermatitis in black children compared with their white counterparts. Br J Dermatol. 2002;147:920-925. doi:10.1046/j.1365-2133.2002.04965.x
  17. Poladian K, De Souza B, McMichael AJ. Atopic dermatitis in adolescents with skin of color. Cutis. 2019;104:164-168.
  18. Yones SS, Palmer RA, Garibaldinos TT, et al. Randomized double-blind trial of the treatment of chronic plaque psoriasis: efficacy of psoralen-UV-A therapy vs narrowband UV-B therapy. Arch Dermatol. 2006;142:836-842. doi:10.1001/archderm.142.7.836
  19. Currimbhoy S, Pandya AG. Cutaneous T-cell lymphoma. In: Kelly AP, Taylor SC, Lim HW, et al, eds. Taylor and Kelly’s Dermatology for Skin of Color. 2nd ed. McGraw-Hill Education; 2016.
  20. Ware OR, Dawson JE, Shinohara MM, et al. Racial limitations of Fitzpatrick skin type. Cutis. 2020;105:77-80.
Issue
Cutis - 109(3)
Page Number
E3-E7
Display Headline
Morphology of Mycosis Fungoides and Sézary Syndrome in Skin of Color

Practice Points

  • Dermatologists should be familiar with the variable morphology of mycosis fungoides (MF)/Sézary syndrome (SS) exhibited by patients of all skin types to ensure prompt diagnosis and treatment.
  • Patients with skin of color (SoC)(Fitzpatrick skin types IV–VI) with MF/SS are more likely than non-SoC patients (Fitzpatrick skin types I–III) to present with hyperpigmentation, a silver hue, and lichenification, whereas non-SoC patients commonly present with erythema and poikiloderma.