Metastatic microsatellite-stable CRC: CXD101 and nivolumab combo shows promise in phase 2
Key clinical point: The combination of CXD101 and nivolumab at full individual doses was effective and well tolerated as a third-line or later treatment for patients with late-stage microsatellite-stable colorectal cancer (MSS CRC).
Major finding: The CXD101-nivolumab combination was well tolerated, with neutropenia (18%) and anemia (7%) being the most common grade 3-4 adverse events. Median progression-free survival and overall survival were 2.1 months (95% CI 1.4-3.9) and 7.0 months (95% CI 5.13-10.22), respectively, with an immune disease control rate of 48% and an immune objective response rate of 9%.
Study details: The data comes from a phase 1b/2 trial including 55 heavily pretreated patients with biopsy-confirmed MSS CRC who received oral CXD101 and intravenous nivolumab.
Disclosures: The trial was supported by Celleron Therapeutics. This study was funded by The Oxford NIHR Comprehensive Biomedical Research Centre and a Cancer Research UK Advanced Clinician Scientist Fellowship. Some authors declared being employees of or holding shares or share options in Celleron Therapeutics.
Source: Saunders MP et al. CXD101 and nivolumab in patients with metastatic microsatellite-stable colorectal cancer (CAROSELL): A multicentre, open-label, single-arm, phase II trial. ESMO Open. 2022;7(6):100594 (Oct 27). doi: 10.1016/j.esmoop.2022.100594
Effect of early treatment and oxaliplatin discontinuation in patients with stage III colon cancer
Key clinical point: Patients with stage III colon cancer (CC) who received >50% of the planned 6-month oxaliplatin-based chemotherapy may discontinue oxaliplatin and continue fluoropyrimidine in case of clinically relevant neurotoxicity.
Major finding: Discontinuation of all treatment (DT) vs no DT was independently associated with worse 3-year disease-free survival (DFS; adjusted hazard ratio [aHR] 1.61; P < .001) and 5-year overall survival (OS; aHR 1.73; P < .001), but discontinuation of oxaliplatin alone had no effect on 3-year DFS (P = .3) or 5-year OS (P = .1). However, patients receiving <50% vs 100% of the planned oxaliplatin cycles had poorer DFS (aHR 1.34; 95% CI 1.10-1.64) and OS (aHR 1.61; 95% CI 1.29-2.01).
Study details: This pooled analysis of 11 adjuvant trials included patients with stage III CC who were to receive 6 months of infusional fluorouracil+leucovorin+oxaliplatin or capecitabine+oxaliplatin.
Disclosures: No funding source was declared. Some authors declared employment, stock, or other ownership interest in or receiving research support, speakers' fee, or consultancy fees from various sources.
Source: Gallois C et al. Prognostic impact of early treatment and oxaliplatin discontinuation in patients with stage III colon cancer: An ACCENT/IDEA pooled analysis of 11 adjuvant trials. J Clin Oncol. 2022 (Oct 28). doi: 10.1200/JCO.21.02726
Colonoscopy screening leads to modest reduction in risk for CRC
Key clinical point: Participants invited to undergo a single screening colonoscopy had a modestly lower risk for colorectal cancer (CRC) at 10 years than those assigned to no screening.
Major finding: At 10 years, the real-world risk for CRC was 18% lower among participants who were invited vs not invited to undergo screening colonoscopy (risk ratio 0.82; 95% CI 0.70-0.93), with the number needed to invite to undergo screening to prevent 1 case of CRC within 10 years being 455 (95% CI 270-1,429).
Study details: The findings are 10-year follow-up results of the NordICC trial including 84,585 participants who were randomly assigned to receive (invited group; n = 28,220) or not receive (usual-care group; n = 56,365) an invitation to undergo a single screening colonoscopy.
Disclosures: This study was funded by the Research Council of Norway, Nordic Cancer Union, and others. Some authors declared serving as expert witnesses or consultants for or receiving research support, speakers' fees, or consultancy fees from various sources.
Source: Bretthauer M et al. Effect of colonoscopy screening on risks of colorectal cancer and related death. N Engl J Med. 2022;387(17):1547-1556 (Oct 27). doi: 10.1056/NEJMoa2208375
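The arithmetic connecting the reported risk ratio and the number needed to invite can be sketched as follows. This is a minimal back-of-the-envelope check, assuming the standard definitions (absolute risk reduction = 1/NNI, and absolute risk reduction = baseline risk × [1 − risk ratio]); the derived baseline risk is an implication of the reported figures, not a number quoted from the trial.

```python
# Back-of-the-envelope check of the reported NordICC figures, using the
# standard definitions ARR = 1/NNI and ARR = baseline_risk * (1 - RR).
rr = 0.82    # reported 10-year risk ratio, invited vs usual care
nni = 455    # reported number needed to invite to prevent 1 CRC case

arr = 1 / nni                # absolute risk reduction implied by the NNI
baseline = arr / (1 - rr)    # implied 10-year CRC risk without screening

print(f"Absolute risk reduction: {arr * 100:.2f} percentage points")
print(f"Implied usual-care 10-year CRC risk: {baseline * 100:.2f}%")
```

An 18% relative reduction applied to a low absolute baseline risk yields only a small absolute benefit, which is why the number needed to invite runs into the hundreds.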
Children with autism show distinct brain features related to motor impairment
Previous research suggests that motor impairments in individuals with autism spectrum disorder (ASD) overlap with those seen in developmental coordination disorder (DCD). But the two conditions may differ significantly in some areas, as children with ASD tend to show weaker skills in social motor tasks such as imitation, wrote Emil Kilroy, PhD, of the University of Southern California, Los Angeles, and colleagues.
The neurobiological basis of autism remains unknown, despite many research efforts, in part because of the heterogeneity of the disease, said corresponding author Lisa Aziz-Zadeh, PhD, also of the University of Southern California, in an interview.
Comorbidity with other disorders is a strong contributing factor to heterogeneity, and approximately 80% of autistic individuals have motor impairments and meet criteria for a diagnosis of DCD, said Dr. Aziz-Zadeh. “Controlling for other comorbidities, such as developmental coordination disorder, when trying to understand the neural basis of autism is important, so that we can understand which neural circuits are related to [core symptoms of autism] and which ones are related to motor impairments that are comorbid with autism, but not necessarily part of the core symptomology,” she explained. “We focused on white matter pathways here because many researchers now think the underlying basis of autism, besides genetics, is brain connectivity differences.”
In their study published in Scientific Reports, the researchers reviewed data from whole-brain correlational tractography for 22 individuals with ASD, 16 with DCD, and 21 typically developing individuals, who served as the control group. The mean age of the participants was approximately 11 years; the age range was 8-17 years.
Overall, patterns of brain diffusion (the movement of fluid, mainly water molecules, in the brain) differed significantly between children with ASD and typically developing children.
The ASD group showed significantly reduced diffusivity in the bilateral fronto-parietal cingulum and the left parolfactory cingulum. This finding reflects previous studies suggesting an association between brain patterns in the cingulum area and ASD. But the current study is “the first to identify the fronto-parietal and the parolfactory portions of the cingulum as well as the anterior caudal u-fibers as specific to core ASD symptomatology and not related to motor-related comorbidity,” the researchers wrote.
Differences in brain diffusivity were associated with worse performance on motor skills and behavioral measures for children with ASD and children with DCD, compared with controls.
Motor development was assessed using the Movement Assessment Battery for Children-2 (MABC-2) and the Florida Apraxia Battery modified for children (FAB-M). The MABC-2 is among the most common tools for measuring motor skills and identifying clinically relevant motor deficits in children and teens aged 3-16 years. The test includes three subtest scores (manual dexterity, gross-motor aiming and catching, and balance) and a total score. Scores are based on a child’s best performance on each component, and higher scores indicate better functioning. In the new study, the MABC-2 total scores averaged 10.57 for controls, compared with 5.76 in the ASD group and 4.31 in the DCD group.
Children with ASD differed from the other groups in social measures. Social skills were measured using several tools, including the Social Responsiveness Scale (SRS), a parent-completed survey whose total score is designed to reflect the severity of social deficits in ASD. It is divided into five subscales for parents to assess a child’s social skill impairment: social awareness, social cognition, social communication, social motivation, and mannerisms. SRS results are reported as T-scores, in which a score of 50 represents the mean. T-scores of 59 and below are generally not associated with ASD, and patients with these scores are considered to have low to no symptomatology. SRS total scores in the new study were 45.95, 77.45, and 55.81 for the controls, ASD group, and DCD group, respectively.
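The T-score scaling described above can be sketched in a few lines. This assumes only the conventional mean-50, SD-10 T-score definition; the norming mean and SD used below are hypothetical illustrations, not values from the SRS manual.

```python
def to_t_score(raw_score, norm_mean, norm_sd):
    """Convert a raw score to a T-score: mean 50, SD 10 by convention."""
    z = (raw_score - norm_mean) / norm_sd   # standardize against norm sample
    return 50 + 10 * z

# Hypothetical norming values, for illustration only.
# A raw score 1.5 SD above the norm mean maps to a T-score of 65.
print(to_t_score(raw_score=90, norm_mean=60, norm_sd=20))  # 65.0
```

On this scale, the ASD group's mean of 77.45 sits nearly 3 SD above the normative mean, while the control and DCD means fall at or near the cutoff of 59 described above.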
Results should raise awareness
“The results were largely predicted in our hypotheses – that we would find specific white matter pathways in autism that would differ from [what we saw in typically developing patients and those with DCD], and that diffusivity in ASD would be related to socioemotional differences,” Dr. Aziz-Zadeh said in an interview.
“What was surprising was that some pathways that had previously been thought to be different in autism were also compromised in DCD, indicating that they were common to motor deficits which both groups shared, not to core autism symptomology,” she noted.
A message for clinicians from the study is that a dual diagnosis of DCD is often missing in ASD practice, said Dr. Aziz-Zadeh. “Given that approximately 80% of children with ASD have DCD, testing for DCD and addressing potential motor issues should be more common practice,” she said.
Dr. Aziz-Zadeh and colleagues are now investigating relationships between the brain, behavior, and the gut microbiome. “We think that understanding autism from a full-body perspective, examining interactions between the brain and the body, will be an important step in this field,” she emphasized.
The study was limited by several factors, including the small sample size, the inclusion of only right-handed participants, and the reliance on self-reports by children and parents, the researchers noted. Additionally, white matter develops at different rates in different age groups, so future studies should consider age as a factor and include further behavioral assessments, they said.
Small sample size limits conclusions
“Understanding the neuroanatomic differences that may contribute to the core symptoms of ASD is a very important goal for the field, particularly how they relate to other comorbid symptoms and neurodevelopmental disorders,” said Michael Gandal, MD, of the department of psychiatry at the University of Pennsylvania, Philadelphia, and a member of the Lifespan Brain Institute at the Children’s Hospital of Philadelphia, in an interview.
“While this study provides some clues into how structural connectivity may relate to motor coordination in ASD, it will be important to replicate these findings in a much larger sample before we can really appreciate how robust these findings are and how well they generalize to the broader ASD population,” Dr. Gandal emphasized.
The study was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The researchers had no financial conflicts to disclose. Dr. Gandal had no financial conflicts to disclose.
FROM SCIENTIFIC REPORTS
Could intermittent fasting improve GERD symptoms?
Intermittent fasting may improve gastroesophageal reflux disease (GERD) symptoms, suggests a small U.S. study.
Twenty-five individuals with suspected GERD symptoms underwent 96-hour pH monitoring. They were asked to follow their normal diet for the first 48 hours; for the second 48 hours, they were asked to switch to a 16-hour fast, which was followed by an 8-hour eating window.
Just over a third of participants were fully compliant with the 16:8 intermittent fasting. But those who followed the regimen experienced a mild reduction in mean acid exposure time and self-reported GERD symptoms scores.
The research was published online in the Journal of Clinical Gastroenterology.
Costly condition
The prevalence of GERD in the United States is estimated at 18%-28%. Annual costs of the condition exceed $18 billion, largely through pharmacologic therapies and diagnostic testing, write lead author Yan Jiang, MD, of the division of gastrointestinal and liver diseases, Keck Medicine of University of Southern California, Los Angeles, and colleagues.
Proton pump inhibitor (PPI) therapy is one of the most prescribed classes of medications in the United States, the authors write. But concerns over the long-term safety of the drugs, as well as the fact that half of patients report breakthrough GERD symptoms, have generated interest in non-PPI treatments among patients and providers.
The role of diet in the management of GERD, however, remains poorly understood, despite the fact that obesity and weight gain have been linked to reflux.
The authors note that intermittent fasting has shown benefits in coronary artery disease, inflammatory disorders, obesity, and diabetes. Proposed mechanisms include anti-inflammatory effects, weight loss, and alterations in hormone secretion.
Intervention tested during a 96-hour clinical evaluation for GERD
To investigate the effects of intermittent fasting in GERD, the researchers screened patients referred to the Stanford University gastrointestinal clinic for diagnostic 96-hour ambulatory wireless pH monitoring of suspected acid reflux symptoms.
They excluded patients younger than 18 years, pregnant women, those with insulin-dependent diabetes, and those who had used PPIs within the previous 7 days. There were other exclusion criteria as well.
The study was completed by 25 participants. The mean age of the patients was 43.5 years; 52% were women. Just under half (44%) were White, and the mean body mass index was 25.8 kg/m2.
For the first 48 hours of the pH monitoring, the patients followed their baseline diet. For the second 48 hours, they were asked to follow an intermittent fasting regimen.
In that regimen, during a 24-hour period, there was an 8-hour caloric intake window and no caloric intake during the other 16 consecutive hours. Participants who fasted for at least 15 hours, as indicated on a self-report food log, were considered successful.
Only 36% of participants were fully adherent to the fasting regimen; 84% were partially compliant, defined as following the regimen for at least 1 of the 2 days of intermittent fasting.
On intermittent fasting days, the mean acid exposure time was 3.5%, compared with 4.3% on the baseline diet. The team calculated that adhering to the 16:8 intermittent fasting regimen reduced the mean acid exposure time by 0.64%.
Intermittent fasting was also associated with a reduction in total GERD symptom scores, at 9.9 following day 4 versus 14.3 following day 2. There were reductions in heartburn symptom scores of 2.6 and in regurgitation scores of 1.8.
When the researchers compared individuals who were compliant with intermittent fasting with those who were only partially compliant, they found that there was still an improvement in GERD symptoms, with a reduction in scores of 3.2.
More acid, bigger benefits
There could be several explanations for the findings, Dr. Jiang said in an interview.
In the short-term study, fewer meals during intermittent fasting and more hours between the last meal and bedtime can help with the supine symptoms of GERD, Dr. Jiang said.
Over the longer term, he added, previous studies have suggested that fasting-induced alterations in inflammatory cytokines or cells could be a contributory mechanism, “but it’s not something that we can glean from our study.”
Participants with elevated acid exposure at baseline and who were more likely to have GERD diagnosed by the pH monitoring seemed to experience the greatest benefit from intermittent fasting, Dr. Jiang pointed out.
“This study looked at all comers with GERD symptoms,” he said. “But if you were to do another study with people with proven GERD, they might experience a bigger impact with intermittent fasting.”
Dr. Jiang added, “If a patient is willing to do intermittent fasting, and certainly if they have other reasons [for doing so], I think it doesn’t hurt, and it might actually help them a little bit in their current symptoms.”
Larger scale, longer follow-up studies needed
Luigi Bonavina, MD, department of biomedical sciences for health, University of Milan, IRCCS Policlinico San Donato, Italy, said in an interview that it was a “nice, original study.”
It is “noteworthy that only one previous study explored the effect of Ramadan on GERD symptoms and found a small improvement of GERD symptoms,” Dr. Bonavina said. “Unfortunately, the magnitude of effect [in the current study] was not as one may have expected, due to small sample size and low compliance with intermittent fasting.”
Although the effect was “mild compared to that seen with PPIs,” it would “be interesting to see whether the results of this pilot, proof-of-concept study can be confirmed on a larger scale with longer follow-up to prove that reflux symptoms will not worsen over time,” he said.
“Intermittent fasting may be recommended, especially in overweight-obese patients with GERD symptoms who are poor responders to gastric acid inhibitors,” Dr. Bonavina added. “Reduction of inflammation, reduction of meal intake, and going to bed with an empty stomach may also work in patients with GERD.”
No funding for the study has been declared. The authors and Dr. Bonavina report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JOURNAL OF CLINICAL GASTROENTEROLOGY
At-home births rose during the pandemic, CDC reports
More women gave birth at home in America last year, continuing a pandemic trend and reaching the highest level in decades, according to figures released by the CDC.
The report said that almost 52,000 births occurred at home in 2021, out of 4 million total births in the country. This was an increase of 12% from 2020. The figure rose by 22% in 2020, when the COVID-19 pandemic hit, over 2019.
There were several possible reasons for the increase in home births. Infection rates and hospitalizations were high. Vaccinations were not available or were not widely used, and many people avoided going to hospitals or the doctor, said Elizabeth Gregory, the report’s lead author.
Also, some women didn’t have health insurance, lived far from a medical facility, or could not get to a hospital fast enough. About 25% of home births are not planned, the Associated Press reported.
Increases in home births occurred across all races, but home births were less common among Hispanics.
The AP reported that home births are riskier than hospital births, according to the American College of Obstetricians and Gynecologists. The organization advises against home births for women carrying multiple babies or those who have previously had a cesarean section.
“Hospitals and accredited birth centers are the safest places to give birth, because although serious complications associated with labor and delivery are rare, they can be catastrophic,” said Jeffrey Ecker, M.D., chief of obstetrics and gynecology at Massachusetts General Hospital, Boston.
A version of this article first appeared on WebMD.com.
Diffuse Papular Eruption With Erosions and Ulcerations
The Diagnosis: Immunotherapy-Related Lichenoid Drug Eruption
Direct immunofluorescence was negative, and histopathology revealed a lichenoid interface dermatitis, minimal parakeratosis, and saw-toothed rete ridges (Figure 1). He was diagnosed with an immunotherapy-related lichenoid drug eruption based on the morphology of the skin lesions and clinicopathologic correlation. Bullous pemphigoid and lichen planus pemphigoides were ruled out given the negative direct immunofluorescence findings. Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) was not consistent with the clinical presentation, especially given the lack of mucosal findings. The histology also was not consistent, as the biopsy specimen lacked apoptotic and necrotic keratinocytes to the degree seen in SJS/TEN and had a greater degree of inflammatory infiltrate. Drug reaction with eosinophilia and systemic symptoms (DRESS) syndrome was ruled out given the lack of systemic findings, including facial swelling and lymphadenopathy, and the clinical appearance of the rash: a morbilliform eruption, the most common presentation of DRESS syndrome, was not present.
Checkpoint inhibitor (CPI) therapy has become the cornerstone in management of certain advanced malignancies.1 Checkpoint inhibitors block cytotoxic T lymphocyte–associated protein 4, programmed cell death-1, and/or programmed cell death ligand-1, allowing activated T cells to infiltrate the tumor microenvironment and destroy malignant cells. Checkpoint inhibitors are approved for the treatment of melanoma, cutaneous squamous cell carcinoma, and Merkel cell carcinoma and are being investigated in various other cutaneous and soft tissue malignancies.1-3
Although CPIs have shown substantial efficacy in the management of advanced malignancies, immune-related adverse events (AEs) are common due to nonspecific immune activation.2 Immune-related cutaneous AEs are the most common immune-related AEs, occurring in 30% to 50% of patients who undergo treatment.2-5 Common immune-related cutaneous AEs include maculopapular, psoriasiform, and lichenoid dermatitis, as well as pruritus without dermatitis.2,3,6 Other reactions include but are not limited to bullous pemphigoid, vitiligolike depigmentation, and alopecia.2,3 Immune-related cutaneous AEs usually are self-limited; however, severe life-threatening reactions such as the spectrum of SJS/TEN and DRESS syndrome also can occur.2-4 Immune-related cutaneous AEs are graded based on the Common Terminology Criteria for Adverse Events: grade 1 reactions are asymptomatic and cover less than 10% of the patient’s body surface area (BSA), grade 2 reactions have mild symptoms and cover 10% to 30% of the patient’s BSA, grade 3 reactions have moderate to severe symptoms and cover greater than 30% of the patient’s BSA, and grade 4 reactions are life-threatening.2,3 With prompt recognition and adequate treatment, mild to moderate immune-related cutaneous AEs—grades 1 and 2—largely are reversible, and less than 5% require discontinuation of therapy.2,3,6 It has been suggested that immune-related cutaneous AEs may be a positive prognostic factor in the treatment of underlying malignancy, indicating adequate immune activation targeting the malignant cells.6
Although our patient had some typical violaceous, flat-topped papules and plaques with Wickham striae, he also had atypical findings for a lichenoid reaction. Given the endorsement of blisters, it is possible that some of these lesions initially were bullous and subsequently ruptured, leaving behind erosions. However, in other areas, there also were eroded papules and ulcerations without a reported history of excoriation, scratching, picking, or prior bullae, including difficult-to-reach areas such as the back. It is favored that these lesions represented a robust lichenoid dermatitis leading to erosive and ulcerated lesions, similar to the formation of bullous lichen planus. Lichenoid eruptions secondary to immunotherapy are well-known phenomena, but a PubMed search of articles indexed for MEDLINE using the terms ulcer, lichenoid, and immunotherapy revealed only 2 cases of ulcerative lichenoid eruptions: a localized digital erosive lichenoid dermatitis and a widespread ulcerative lichenoid drug eruption without true erosions.7,8 However, widespread erosive and ulcerated lichenoid reactions are rare.
Lichenoid eruptions most strongly are associated with anti–programmed cell death-1/programmed cell death ligand-1 therapy, occurring in 20% of patients undergoing treatment.3 Lichenoid eruptions present as discrete, pruritic, erythematous, violaceous papules and plaques on the chest and back and rarely may involve the limbs, palmoplantar surfaces, and oral mucosa.2,3,6 Histopathologic features include a dense bandlike lymphocytic infiltrate in the dermis with scattered apoptotic keratinocytes in the basal layer of the epidermis.2,4,6 Grades 1 to 2 lesions can be managed with high-potency topical corticosteroids without CPI dose interruption, with more extensive grade 2 lesions requiring systemic corticosteroids.2,6,9 Lichenoid eruptions grade 3 or higher also require systemic corticosteroid therapy and cessation of CPI therapy until the eruption has receded to grade 0 to 1.2 Alternative treatment options for high-grade toxicity include phototherapy and acitretin.2,4,9
Our patient was treated with cessation of immunotherapy and initiation of a systemic corticosteroid taper, acitretin, and narrowband UVB therapy. After 6 weeks of treatment, the pain and pruritus improved and the rash had resolved in some areas while it had taken on a more classic lichenoid appearance with violaceous scaly papules and plaques (Figure 2) in areas of prior ulcers and erosions. He no longer had any bullae, erosions, or ulcers.
- Barrios DM, Do MH, Phillips GS, et al. Immune checkpoint inhibitors to treat cutaneous malignancies. J Am Acad Dermatol. 2020;83:1239-1253. doi:10.1016/j.jaad.2020.03.131
- Geisler AN, Phillips GS, Barrios DM, et al. Immune checkpoint inhibitor-related dermatologic adverse events. J Am Acad Dermatol. 2020;83:1255-1268. doi:10.1016/j.jaad.2020.03.132
- Tattersall IW, Leventhal JS. Cutaneous toxicities of immune checkpoint inhibitors: the role of the dermatologist. Yale J Biol Med. 2020;93:123-132.
- Si X, He C, Zhang L, et al. Management of immune checkpoint inhibitor-related dermatologic adverse events. Thorac Cancer. 2020;11:488-492. doi:10.1111/1759-7714.13275
- Eggermont AMM, Kicinski M, Blank CU, et al. Association between immune-related adverse events and recurrence-free survival among patients with stage III melanoma randomized to receive pembrolizumab or placebo: a secondary analysis of a randomized clinical trial. JAMA Oncol. 2020;6:519-527. doi:10.1001 /jamaoncol.2019.5570
- Sibaud V, Meyer N, Lamant L, et al. Dermatologic complications of anti-PD-1/PD-L1 immune checkpoint antibodies. Curr Opin Oncol. 2016;28:254-263. doi:10.1097/CCO.0000000000000290
- Martínez-Doménech Á, García-Legaz Martínez M, Magdaleno-Tapial J, et al. Digital ulcerative lichenoid dermatitis in a patient receiving anti-PD-1 therapy. Dermatol Online J. 2019;25:13030/qt8sm0j7t7.
- Davis MJ, Wilken R, Fung MA, et al. Debilitating erosive lichenoid interface dermatitis from checkpoint inhibitor therapy. Dermatol Online J. 2018;24:13030/qt3vq6b04v.
- Apalla Z, Papageorgiou C, Lallas A, et al. Cutaneous adverse events of immune checkpoint inhibitors: a literature review [published online January 29, 2021]. Dermatol Pract Concept. 2021;11:E2021155. doi:10.5826/dpc.1101a155
The Diagnosis: Immunotherapy-Related Lichenoid Drug Eruption
Direct immunofluorescence was negative, and histopathology revealed a lichenoid interface dermatitis, minimal parakeratosis, and saw-toothed rete ridges (Figure 1). He was diagnosed with an immunotherapyrelated lichenoid drug eruption based on the morphology of the skin lesions and clinicopathologic correlation. Bullous pemphigoid and lichen planus pemphigoides were ruled out given the negative direct immunofluorescence findings. Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) was not consistent with the clinical presentation, especially given the lack of mucosal findings. The histology also was not consistent, as the biopsy specimen lacked apoptotic and necrotic keratinocytes to the degree seen in SJS/TEN and also had a greater degree of inflammatory infiltrate. Drug reaction with eosinophilia and systemic symptoms (DRESS) syndrome was ruled out given the lack of systemic findings, including facial swelling and lymphadenopathy and the clinical appearance of the rash. No morbilliform features were present, which is the most common presentation of DRESS syndrome.
Checkpoint inhibitor (CPI) therapy has become the cornerstone in management of certain advanced malignancies.1 Checkpoint inhibitors block cytotoxic T lymphocyte–associated protein 4, programmed cell death-1, and/or programmed cell death ligand-1, allowing activated T cells to infiltrate the tumor microenvironment and destroy malignant cells. Checkpoint inhibitors are approved for the treatment of melanoma, cutaneous squamous cell carcinoma, and Merkel cell carcinoma and are being investigated in various other cutaneous and soft tissue malignancies.1-3
Although CPIs have shown substantial efficacy in the management of advanced malignancies, immune-related adverse events (AEs) are common due to nonspecific immune activation.2 Immune-related cutaneous AEs are the most common immune-related AEs, occurring in 30% to 50% of patients who undergo treatment.2-5 Common immune-related cutaneous AEs include maculopapular, psoriasiform, and lichenoid dermatitis, as well as pruritus without dermatitis.2,3,6 Other reactions include but are not limited to bullous pemphigoid, vitiligolike depigmentation, and alopecia.2,3 Immune-related cutaneous AEs usually are self-limited; however, severe life-threatening reactions such as the spectrum of SJS/TEN and DRESS syndrome also can occur.2-4 Immune-related cutaneous AEs are graded based on the Common Terminology Criteria for Adverse Events: grade 1 reactions are asymptomatic and cover less than 10% of the patient’s body surface area (BSA), grade 2 reactions have mild symptoms and cover 10% to 30% of the patient’s BSA, grade 3 reactions have moderate to severe symptoms and cover greater than 30% of the patient’s BSA, and grade 4 reactions are life-threatening.2,3 With prompt recognition and adequate treatment, mild to moderate immune-related cutaneous AEs—grades 1 and 2—largely are reversible, and less than 5% require discontinuation of therapy.2,3,6 It has been suggested that immune-related cutaneous AEs may be a positive prognostic factor in the treatment of underlying malignancy, indicating adequate immune activation targeting the malignant cells.6
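The Common Terminology Criteria grading described above is essentially a threshold rule on symptom severity and body surface area, which can be sketched as a simple function. This is an illustrative simplification only, not a clinical tool; the function name and symptom labels are invented for this example, and actual CTCAE grading involves additional event-specific criteria.

```python
def ctcae_skin_grade(bsa_pct: float, symptoms: str) -> int:
    """Map an immune-related cutaneous AE to a CTCAE-style grade.

    Simplified sketch of the thresholds described in the text;
    real CTCAE grading uses additional event-specific criteria.
    """
    if symptoms == "life-threatening":
        return 4  # grade 4: life-threatening reaction
    if bsa_pct > 30 or symptoms in ("moderate", "severe"):
        return 3  # grade 3: moderate to severe symptoms, >30% BSA
    if bsa_pct >= 10 or symptoms == "mild":
        return 2  # grade 2: mild symptoms, 10%-30% BSA
    return 1      # grade 1: asymptomatic, <10% BSA
```
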
Although our patient had some typical violaceous, flat-topped papules and plaques with Wickham striae, he also had findings atypical for a lichenoid reaction. Given his report of blisters, it is possible that some of these lesions initially were bullous and subsequently ruptured, leaving behind erosions. However, in other areas there also were eroded papules and ulcerations without a reported history of excoriation, scratching, picking, or prior bullae, including in difficult-to-reach areas such as the back. The favored explanation is that these lesions represented a robust lichenoid dermatitis leading to erosive and ulcerated lesions, similar to the formation of bullous lichen planus. Lichenoid eruptions secondary to immunotherapy are well-known phenomena, but a PubMed search of articles indexed for MEDLINE using the terms ulcer, lichenoid, and immunotherapy revealed only 2 cases of ulcerative lichenoid eruptions: a localized digital erosive lichenoid dermatitis and a widespread ulcerative lichenoid drug eruption without true erosions.7,8 Widespread erosive and ulcerated lichenoid reactions thus are rare.
Lichenoid eruptions are most strongly associated with anti–programmed cell death-1/programmed cell death ligand-1 therapy, occurring in 20% of patients undergoing treatment.3 They present as discrete, pruritic, erythematous to violaceous papules and plaques on the chest and back and rarely may involve the limbs, palmoplantar surfaces, and oral mucosa.2,3,6 Histopathologic features include a dense bandlike lymphocytic infiltrate in the dermis with scattered apoptotic keratinocytes in the basal layer of the epidermis.2,4,6 Grade 1 to 2 lesions can be managed with high-potency topical corticosteroids without CPI dose interruption, with more extensive grade 2 lesions requiring systemic corticosteroids.2,6,9 Lichenoid eruptions of grade 3 or higher require systemic corticosteroid therapy and cessation of CPI therapy until the eruption has receded to grade 0 to 1.2 Alternative treatment options for high-grade toxicity include phototherapy and acitretin.2,4,9
Our patient was treated with cessation of immunotherapy and initiation of a systemic corticosteroid taper, acitretin, and narrowband UVB therapy. After 6 weeks of treatment, the pain and pruritus improved and the rash had resolved in some areas while it had taken on a more classic lichenoid appearance with violaceous scaly papules and plaques (Figure 2) in areas of prior ulcers and erosions. He no longer had any bullae, erosions, or ulcers.
- Barrios DM, Do MH, Phillips GS, et al. Immune checkpoint inhibitors to treat cutaneous malignancies. J Am Acad Dermatol. 2020;83:1239-1253. doi:10.1016/j.jaad.2020.03.131
- Geisler AN, Phillips GS, Barrios DM, et al. Immune checkpoint inhibitor-related dermatologic adverse events. J Am Acad Dermatol. 2020;83:1255-1268. doi:10.1016/j.jaad.2020.03.132
- Tattersall IW, Leventhal JS. Cutaneous toxicities of immune checkpoint inhibitors: the role of the dermatologist. Yale J Biol Med. 2020;93:123-132.
- Si X, He C, Zhang L, et al. Management of immune checkpoint inhibitor-related dermatologic adverse events. Thorac Cancer. 2020;11:488-492. doi:10.1111/1759-7714.13275
- Eggermont AMM, Kicinski M, Blank CU, et al. Association between immune-related adverse events and recurrence-free survival among patients with stage III melanoma randomized to receive pembrolizumab or placebo: a secondary analysis of a randomized clinical trial. JAMA Oncol. 2020;6:519-527. doi:10.1001/jamaoncol.2019.5570
- Sibaud V, Meyer N, Lamant L, et al. Dermatologic complications of anti-PD-1/PD-L1 immune checkpoint antibodies. Curr Opin Oncol. 2016;28:254-263. doi:10.1097/CCO.0000000000000290
- Martínez-Doménech Á, García-Legaz Martínez M, Magdaleno-Tapial J, et al. Digital ulcerative lichenoid dermatitis in a patient receiving anti-PD-1 therapy. Dermatol Online J. 2019;25:13030/qt8sm0j7t7.
- Davis MJ, Wilken R, Fung MA, et al. Debilitating erosive lichenoid interface dermatitis from checkpoint inhibitor therapy. Dermatol Online J. 2018;24:13030/qt3vq6b04v.
- Apalla Z, Papageorgiou C, Lallas A, et al. Cutaneous adverse events of immune checkpoint inhibitors: a literature review [published online January 29, 2021]. Dermatol Pract Concept. 2021;11:E2021155. doi:10.5826/dpc.1101a155
A 70-year-old man presented with a painful, pruritic, diffuse eruption on the trunk, legs, and arms of 2 months’ duration. He had a history of stage IV pleomorphic cell sarcoma of the retroperitoneum and was started on pembrolizumab therapy 6 weeks prior to the eruption. Physical examination revealed violaceous papules and plaques with shiny reticulated scaling as well as multiple scattered eroded papules and shallow ulcerations. The oral mucosa and genitals were spared. The patient endorsed blisters followed by open sores that were both itchy and painful. He denied self-infliction. Both the patient and his wife denied scratching. Two biopsies for direct immunofluorescence and histopathology were performed.
Poor NAFLD outcomes with increased VCTE-measured liver stiffness
Although previous retrospective studies have suggested that increased liver stiffness, as measured by vibration-controlled transient elastography (VCTE; FibroScan), is associated with an increase in liver-related events, there is a paucity of prospective data, reported Samer Gawrieh, MD, from Indiana University, Carmel and Indianapolis. VCTE provides a noninvasive liver stiffness measurement (LSM) used to track progression toward cirrhosis.
In their prospective cohort study of patients representing the entire spectrum of NAFLD, progression to LSM-defined cirrhosis was independently associated with the risk for a composite clinical outcome of death, decompensation, hepatocellular carcinoma, or a Model for End-Stage Liver Disease (MELD) score greater than 15, he said.
Their findings show that “progression to LSM-defined cirrhosis by VCTE is strongly associated with poor clinical outcomes,” Dr. Gawrieh said.
Study findings
Investigators looked at prospective data on 894 patients with biopsy-proven NAFLD in the Nonalcoholic Steatohepatitis (NASH) Clinical Research Network database. The sample included patients with a minimum of two LSM readings taken from 2014 through 2022.
They defined progression to LSM-defined cirrhosis as reaching an LSM greater than 14.9 kPa (a 90% specificity cutoff) among patients without cirrhosis on the baseline VCTE (baseline LSM below 12.1 kPa, a 90% sensitivity cutoff).
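The study's two VCTE cutoffs amount to a pair of threshold checks, sketched below. The kPa values are from the study as reported; the function and constant names are ours for illustration.

```python
# 90% sensitivity cutoff: below this at baseline, no LSM-defined cirrhosis
BASELINE_SENSITIVITY_CUTOFF_KPA = 12.1
# 90% specificity cutoff: above this on follow-up, LSM-defined cirrhosis
PROGRESSION_SPECIFICITY_CUTOFF_KPA = 14.9

def no_cirrhosis_at_baseline(baseline_lsm_kpa: float) -> bool:
    """Inclusion check: baseline LSM below the 90% sensitivity cutoff."""
    return baseline_lsm_kpa < BASELINE_SENSITIVITY_CUTOFF_KPA

def progressed_to_lsm_cirrhosis(followup_lsm_kpa: float) -> bool:
    """Endpoint check: follow-up LSM above the 90% specificity cutoff."""
    return followup_lsm_kpa > PROGRESSION_SPECIFICITY_CUTOFF_KPA
```
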
They also performed a histology-based subanalysis, including data only from those patients who had LSM within 6 months of a liver biopsy.
The median patient age was 60 years, 37% of patients were male, 80.9% were White, and 11.5% were Hispanic/Latino. The median body mass index (BMI) was 32.
Overall, 119 patients (13.3%) progressed to LSM-defined cirrhosis.
At a median follow-up of 3.69 years for the 775 patients without LSM progression, 79 (10.2%) had one or more of the events in the composite clinical outcome.
In contrast, after a median 5.48 years of follow-up, 31 of the 119 patients with progression (26.1%) had one or more of the composite events (P < .0001).
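The headline proportions follow directly from the reported counts; this is an arithmetic check only, with variable names of our own choosing.

```python
# Reported counts from the study
total_patients = 894
progressors_total, progressors_events = 119, 31
non_progressors_total, non_progressors_events = 775, 79

progression_rate = progressors_total / total_patients
event_rate_non_progressors = non_progressors_events / non_progressors_total
event_rate_progressors = progressors_events / progressors_total

print(f"{progression_rate:.1%}")            # 13.3% progressed to LSM-defined cirrhosis
print(f"{event_rate_non_progressors:.1%}")  # 10.2% of non-progressors had a composite event
print(f"{event_rate_progressors:.1%}")      # 26.1% of progressors had a composite event
```
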
The median rates of progression to LSM-defined cirrhosis in the overall cohort were 2% at 1 year, 11% at 3 years, and 16% at 5 years.
Researchers found a correlation between progression to LSM-defined cirrhosis and baseline histological fibrosis stage on biopsy, with a rate of 7% among those with no baseline fibrosis, 9% each for patients with stage I A-C or stage II fibrosis, 24% of those with baseline bridging fibrosis, and 25% of those with baseline cirrhosis.
A comparison of the time to a composite clinical outcome event between patients with and without progression to LSM-defined cirrhosis showed that LSM-defined progression was associated with a near doubling in risk (hazard ratio, 1.84; P = .0039).
In a multivariate Cox regression analysis controlling for age, sex, race, BMI, diabetes status, and baseline LSM, only LSM-defined progression (HR, 1.93; P < .01) and age (HR, 1.03; P < .01) were significant predictors.
Dr. Gawrieh noted that while age was a statistically significant factor, it was only weakly associated.
“These data suggest that development of cirrhosis LSM criteria is a promising surrogate for clinical outcomes in patients with NAFLD,” Dr. Gawrieh concluded.
Progression definition questioned
Following the presentation, Nezam Afdhal, MD, chief of the division of gastroenterology, hepatology, and nutrition at Beth Israel Deaconess Hospital in Boston, questioned how 25% of patients who had biopsy-proven cirrhosis could progress to LSM-defined cirrhosis.
Dr. Gawrieh said that, according to inclusion criteria, the patients could not have LSM-defined cirrhosis with the sensitivity cutoff of 12.1 kPa, and that of the 10 patients with baseline cirrhosis in the cohort, all had LSM of less than 12.1 kPa. However, he admitted that because those 10 patients were technically not progressors to cirrhosis, they should have been removed from the analysis for clinical outcomes.
Mark Hartman, MD, a clinical researcher at Eli Lilly and Company in Indianapolis, said the study is valuable but noted that those patients who progressed tended to have higher LSM at baseline as well as a higher [fibrosis-4 score].
Dr. Gawrieh added that the investigators are exploring variables that might explain progression to cirrhosis among patients without high baseline liver stiffness, such as alcohol use or drug-induced liver injury.
The study was supported by the National Institutes of Health and the NASH Clinical Research Network institutions. Dr. Gawrieh disclosed research grants from NIH, Zydus, Viking, and Sonic Incytes, and consulting for TransMedics and Pfizer. Dr. Afdhal and Dr. Hartman reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE LIVER MEETING
Long-term behavioral follow-up of children exposed to mood stabilizers and antidepressants: A look forward
Much of the focus of reproductive psychiatry over the last 1 to 2 decades has been on issues regarding risk of fetal exposure to psychiatric medications in the context of the specific risk for teratogenesis or organ malformation. Concerns and questions are mostly focused on exposure to any number of medications that women take during the first trimester, as it is during that period that the major organs are formed.
More recently, there has been appropriate interest in the effect of fetal exposure to psychiatric medications on risk for obstetrical and neonatal complications. This has particularly been the case for antidepressants: fetal exposure, while associated with transient jitteriness and irritability about 20% of the time, has not been associated with symptoms requiring frank clinical intervention.
Concerning mood stabilizers, the risk for organ dysgenesis following fetal exposure to sodium valproate has been very well established, and we’ve known for over a decade about the adverse effects of fetal exposure to sodium valproate on behavioral outcomes (Lancet Neurol. 2013 Mar;12[3]:244-52). We also now have ample data on lamotrigine, one of the medicines most widely used by reproductive-age women for the treatment of bipolar disorder, supporting the absence of a risk of organ malformation with first-trimester exposure.
Most recently, a study of 292 children of women with epilepsy evaluated women treated with more modern anticonvulsants such as lamotrigine and levetiracetam, alone or as polytherapy. The results showed no difference in language, motor, cognitive, social, emotional, and general adaptive functioning in children exposed to either lamotrigine or levetiracetam relative to unexposed children of women with epilepsy. However, the researchers found that an increase in antiepileptic drug plasma level appeared to be associated with decreased motor and sensory function. These are reassuring data that confirm earlier work, which failed to reveal a signal of concern for lamotrigine, and now provide some of the first data on levetiracetam, which is widely used by reproductive-age women with epilepsy (JAMA Neurol. 2021 Aug 1;78[8]:927-936). While one caveat of the study is the short follow-up of 2 years, the absence of a signal of concern is reassuring. With more and more data demonstrating that bipolar disorder is an illness that requires chronic treatment for many people and that discontinuation is associated with high risk for relapse, it is an advance for the field to have data both on risk for teratogenesis and on longer-term neurobehavioral outcomes.
There is a wealth of information regarding reproductive safety, organ malformation, and acute neonatal outcomes for antidepressants. The last decade has brought interest in and analysis of specific reports of increased risk of both autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) following fetal exposure to antidepressants. Based on reviews of pooled meta-analyses, concern about risk for ASD and ADHD has been put to rest for most clinicians and patients (J Clin Psychiatry. 2020 May 26;81[3]:20f13463). For other neurodevelopmental disorders, results have been somewhat inconclusive. Over the last 5-10 years, there have been sporadic reports of concern about problems in specific domains of neurodevelopment, whether speech, language, or motor functioning, in offspring of women who used antidepressants during pregnancy, but no signal of concern has been consistent.
In a previous column, I addressed a Danish study that showed no increased risk of longer-term sequelae after fetal exposure to antidepressants. Now, a new study has examined 1.93 million pregnancies in the Medicaid Analytic eXtract and 1.25 million pregnancies in the IBM MarketScan Research Database with follow-up up to 14 years of age where the specific interval for fetal exposure was from gestational age of 19 weeks to delivery, as that is the period that corresponds most to synaptogenesis in the brain. The researchers examined a spectrum of neurodevelopmental disorders such as developmental speech issues, ADHD, ASD, dyslexia, and learning disorders, among others. They found a twofold increased risk for neurodevelopmental disorders in the unadjusted models that flattened to no finding when factoring in environmental and genetic risk variables, highlighting the importance of dealing appropriately with confounders when performing these analyses. Those confounders examined include the mother’s use of alcohol and tobacco, and her body mass index and overall general health (JAMA Intern Med. 2022;182[11]:1149-60).
Given the consistency of these results with earlier data, patients can be increasingly comfortable as they weigh the benefits and risks of antidepressant use during pregnancy, factoring in the risk of fetal exposure with added data on long-term neurobehavioral sequelae. With that said, we need to remember the importance of initiatives to address alcohol consumption, poor nutrition, tobacco use, elevated BMI, and general health during pregnancy. These are modifiable risks that we as clinicians should focus on in order to optimize outcomes during pregnancy.
We have come so far in knowledge about fetal exposure to antidepressants relative to other classes of medications women take during pregnancy, about which, frankly, we are still starved for data. As use of psychiatric medications during pregnancy continues to grow, we can rest a bit more comfortably. But we should also address some of the other behaviors that have adverse effects on maternal and child well-being.
Dr. Cohen is the director of the Ammon-Pinizzotto Center for Women’s Mental Health at Massachusetts General Hospital (MGH) in Boston, which provides information resources and conducts clinical care and research in reproductive mental health. He has been a consultant to manufacturers of psychiatric medications. Email Dr. Cohen at [email protected].
Much of the focus of reproductive psychiatry over the last 1 to 2 decades has been on issues regarding risk of fetal exposure to psychiatric medications in the context of the specific risk for teratogenesis or organ malformation. Concerns and questions are mostly focused on exposure to any number of medications that women take during the first trimester, as it is during that period that the major organs are formed.
More recently, there has been appropriate interest in the effects of fetal exposure to psychiatric medications on risk for obstetrical and neonatal complications. This has particularly been the case for antidepressants: fetal exposure to these medications, while associated with transient jitteriness and irritability about 20% of the time, has not been associated with symptoms requiring frank clinical intervention.
Concerning mood stabilizers, the risk for organ dysgenesis following fetal exposure to sodium valproate has been very well established, and we’ve known for over a decade about the adverse effects of fetal exposure to sodium valproate on behavioral outcomes (Lancet Neurol. 2013 Mar;12[3]:244-52). We also now have ample data on lamotrigine, one of the medicines most widely used by reproductive-age women for treatment of bipolar disorder, supporting the absence of a risk of organ malformation with first-trimester exposure.
Most recently, a study of 292 children of women with epilepsy evaluated women treated with more modern anticonvulsants such as lamotrigine and levetiracetam, alone or as polytherapy. The results showed no difference in language, motor, cognitive, social, emotional, and general adaptive functioning in children exposed to either lamotrigine or levetiracetam relative to unexposed children of women with epilepsy. However, the researchers found that higher antiepileptic drug plasma levels appeared to be associated with decreased motor and sensory function. These are reassuring data that confirm earlier work, which failed to reveal a signal of concern for lamotrigine, and they provide some of the first data on levetiracetam, which is widely used by reproductive-age women with epilepsy (JAMA Neurol. 2021 Aug 1;78[8]:927-936). One caveat of the study is its short follow-up of 2 years, but the absence of a signal of concern is reassuring. With more and more data demonstrating that bipolar disorder is an illness requiring chronic treatment for many people, and that discontinuation is associated with a high risk for relapse, it is an advance for the field to have data on both risk for teratogenesis and longer-term neurobehavioral outcomes.
There is vast information regarding reproductive safety, organ malformation, and acute neonatal outcomes for antidepressants. The last decade has brought interest in and analysis of specific reports of increased risk of both autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) following fetal exposure to antidepressants. What can be said based on reviews of pooled meta-analyses is that the risk for ASD and ADHD has been put to rest for most clinicians and patients (J Clin Psychiatry. 2020 May 26;81[3]:20f13463). With other neurodevelopmental disorders, results have been somewhat inconclusive. Over the last 5-10 years, there have been sporadic reports of concerns about problems in a specific domain of neurodevelopment in offspring of women who have used antidepressants during pregnancy, whether it be speech, language, or motor functioning, but no signal of concern has been consistent.
In a previous column, I addressed a Danish study that showed no increased risk of longer-term sequelae after fetal exposure to antidepressants. Now, a new study has examined 1.93 million pregnancies in the Medicaid Analytic eXtract and 1.25 million pregnancies in the IBM MarketScan Research Database, with follow-up to 14 years of age. The exposure window was from a gestational age of 19 weeks to delivery, the period that corresponds most closely to synaptogenesis in the brain. The researchers examined a spectrum of neurodevelopmental disorders, including developmental speech issues, ADHD, ASD, dyslexia, and learning disorders. They found a twofold increased risk for neurodevelopmental disorders in the unadjusted models that flattened to no finding when environmental and genetic risk variables were factored in, highlighting the importance of dealing appropriately with confounders when performing these analyses. The confounders examined included the mother’s use of alcohol and tobacco, and her body mass index and overall general health (JAMA Intern Med. 2022;182[11]:1149-60).
Given the consistency of these results with earlier data, patients can be increasingly comfortable as they weigh the benefits and risks of antidepressant use during pregnancy, factoring in the risk of fetal exposure with added data on long-term neurobehavioral sequelae. With that said, we need to remember the importance of initiatives to address alcohol consumption, poor nutrition, tobacco use, elevated BMI, and general health during pregnancy. These are modifiable risks that we as clinicians should focus on in order to optimize outcomes during pregnancy.
We have come so far in knowledge about fetal exposure to antidepressants relative to other classes of medications women take during pregnancy, about which, frankly, we are still starved for data. As use of psychiatric medications during pregnancy continues to grow, we can rest a bit more comfortably. But we should also address some of the other behaviors that have adverse effects on maternal and child well-being.
Dr. Cohen is the director of the Ammon-Pinizzotto Center for Women’s Mental Health at Massachusetts General Hospital (MGH) in Boston, which provides information resources and conducts clinical care and research in reproductive mental health. He has been a consultant to manufacturers of psychiatric medications. Email Dr. Cohen at [email protected].
How accurate is transcutaneous bilirubin testing in newborns with darker skin tones?
EVIDENCE SUMMARY
Some evidence suggests overestimation in all skin tones
In a prospective diagnostic cohort study of 1553 infants in Nigeria, the accuracy of transcutaneous bilirubin (TcB) measurement with 2 transcutaneous bilirubinometers (Konica Minolta/Air Shields JM-103 and Respironics BiliChek) was analyzed.1 The study population comprised neonates delivered in a single maternity hospital in Lagos who were ≥ 35 weeks gestational age or ≥ 2.2 kg.
Using a color scale generated for this population, researchers stratified neonates into 1 of 3 skin tone groups: light brown, medium brown, or dark brown. Paired TcB and total serum bilirubin (TSB) samples were collected in the first 120 hours of life in all patients. JM-103 recordings comprised 71.9% of TcB readings.
Overall, TcB testing overestimated the TSB by ≥ 2 mg/dL in 64.5% of infants, ≥ 3 mg/dL in 42.7%, and > 4 mg/dL in 25.7%. TcB testing underestimated the TSB by ≥ 2 mg/dL in 1.1% of infants, ≥ 3 mg/dL in 0.5%, and > 4 mg/dL in 0.3%.1
Local variation in skin tone was not associated with changes in overestimation, although the researchers noted that a key limitation of the study was a lack of light-toned infants for comparison.1
A prospective diagnostic cohort study of 1359 infants in Spain compared TcB measurements to TSB levels using the Dräger Jaundice Meter JM-105.2 Patients included all neonates (gestational age, 36.6 to 41.1 weeks) born at a single hospital in Barcelona.
Using a validated skin tone scale, researchers stratified neonates at 24 hours of life to 1 of 4 skin tones: light (n = 337), medium light (n = 750), medium dark (n = 249), and dark (n = 23). They then obtained TSB samples at 48 to 72 hours of life, along with other routine screening labs and midsternal TcB measurements.
TcB testing tended to overestimate TSB (when < 15 mg/dL) for all skin tones, although to a larger degree for neonates with dark skin tones (mean overestimation, 0.7 mg/dL for light; 1.08 mg/dL for medium light; 1.89 mg/dL for medium dark; and 1.86 mg/dL for dark; P < .001 for light vs medium dark or dark).2
Stated limitations of the study included relatively low numbers of neonates with dark skin tone, no test of interobserver reliability in skin tone assignment, and enrollment of exclusively healthy neonates with low bilirubin levels.2
Other studies report overestimation in infants with darker skin tone
Two Canadian diagnostic cohort studies also found evidence that TcB testing overestimated TSB in infants with darker skin tones, although TcB test characteristics proved stable over a wide range of bilirubin levels.
The first study enrolled 451 neonates ≥ 35 weeks gestational age at a hospital in Ottawa and assessed TcB using the JM-103 meter.3 The neonates were stratified into light (n = 51), medium (n = 326), and dark (n = 74) skin tones using cosmetic reference color swatches. All had a TcB and TSB obtained within 30 minutes of each other.
TcB testing underestimated TSB in infants with light and medium skin tones and overestimated TSB in infants with darker skin tone (mean difference, –0.88 mg/dL for light; –1.1 mg/dL for medium; and 0.68 mg/dL for dark; P not given). The mean area under the curve (AUC) was ≥ 0.94 for all receiver operating characteristic (ROC) curves across all skin tones and bilirubin thresholds (AUC range, 0-1, with > 0.8 indicating strong modeling).3
Limitations of the study included failure to check interrater reliability for skin tone assessment, low numbers of infants with elevated bilirubin (≥ 13.5 mg/dL), and very few infants in either the dark or light skin tone groups.3
The second Canadian study enrolled 774 infants born at ≥ 37 weeks gestational age in Calgary and assessed TcB with the JM-103.4 Infants were categorized as having light (n = 347), medium (n = 412), and dark (n = 15) skin tones by study nurses, based on reference cosmetic colors. All infants had paired TcB and TSB measurements within 60 minutes of each other and before 120 hours of life.
Multivariate linear regression analysis using medium skin tone as the reference group found a tendency toward low TcB levels in infants with light skin tone and a tendency toward high TcB levels in infants with dark skin tone (adjusted R2 = 0.86). The AUC was ≥ 0.95 for all ROC curves for light- and medium-toned infants at key TSB cutoff points; the study included too few infants with dark skin tone to generate ROC curves for that group.4
Recommendations from others
In 2009, the American Academy of Pediatrics (AAP) recommended universal predischarge screening for hyperbilirubinemia in newborns using either TcB testing or TSB. The AAP statement did not address the effect of skin tone on TcB levels, but did advise regular calibration of TcB and TSB results at the hospital level.5
In 2016, the National Institute for Health and Care Excellence (NICE) updated their guideline on jaundice in newborns younger than 28 days old. NICE recommended visual inspection of all babies for jaundice by examining them in bright natural light and looking for jaundice on blanched skin; it specifically advised checking sclera and gums in infants with darker skin tones.6
The Nigerian researchers noted earlier have published an updated TcB nomogram for their patient population.7
Editor’s takeaway
Even with a variation of 2 mg/dL or less between transcutaneous and serum bilirubin, and a strength of recommendation of C because lab values are considered disease-oriented evidence, TcB proves useful. In practice, concerning TcB values should prompt serum bilirubin confirmation. This evidence suggests we may be ordering TSB measurements more or less often depending on skin tone, reinforcing the need to review and adjust TcB cutoff levels based on the local population.
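The confirm-with-serum workflow described above can be sketched as a simple decision rule. This is an illustrative sketch only, not clinical guidance: the threshold and margin values are hypothetical placeholders, and in practice both would come from local calibration of TcB against paired serum samples, as the studies and guidelines above recommend.

```python
# Hypothetical sketch of a TcB screening rule. The threshold and margin
# are illustrative placeholders, NOT clinical values; each site would
# derive its own from local TcB-vs-TSB calibration data.

def needs_serum_confirmation(tcb_mg_dl: float,
                             phototherapy_threshold_mg_dl: float,
                             margin_mg_dl: float = 3.0) -> bool:
    """Flag a TcB reading for serum (TSB) confirmation when it falls
    within `margin_mg_dl` of the local phototherapy threshold.

    The margin allows for the over- and underestimation of TSB by TcB
    reported in the cohort studies above."""
    return tcb_mg_dl >= phototherapy_threshold_mg_dl - margin_mg_dl

# With a hypothetical threshold of 15 mg/dL and a 3 mg/dL margin,
# a TcB of 12.5 mg/dL is flagged for confirmatory TSB; 10.0 is not.
print(needs_serum_confirmation(12.5, 15.0))  # True
print(needs_serum_confirmation(10.0, 15.0))  # False
```

Widening the margin for populations in which TcB is known to overestimate or underestimate TSB is one way to operationalize the local-calibration point made in the takeaway.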
1. Olusanya BO, Imosemi DO, Emokpae AA. Differences between transcutaneous and serum bilirubin measurements in Black African neonates. Pediatrics. 2016;138:e20160907. doi: 10.1542/peds.2016-0907
2. Maya-Enero S, Candel-Pau J, Garcia-Garcia J, et al. Reliability of transcutaneous bilirubin determination based on skin color determined by a neonatal skin color scale of our own. Eur J Pediatr. 2021;180:607-616. doi: 10.1007/s00431-020-03885-0
3. Samiee-Zafarghandy S, Feberova J, Williams K, et al. Influence of skin colour on diagnostic accuracy of the jaundice meter JM 103 in newborns. Arch Dis Child Fetal Neonatal Ed. 2014;99:F480-F484. doi: 10.1136/archdischild-2013-305699
4. Wainer S, Rabi Y, Parmar SM, et al. Impact of skin tone on the performance of a transcutaneous jaundice meter. Acta Paediatr. 2009;98:1909-1915. doi: 10.1111/j.1651-2227.2009.01497.x
5. Maisels MJ, Bhutani VK, Bogen D, et al. Hyperbilirubinemia in the newborn infant ≥ 35 weeks’ gestation: an update with clarifications. Pediatrics. 2009;124:1193-1198. doi: 10.1542/peds.2009-0329
6. Amos RC, Jacob H, Leith W. Jaundice in newborn babies under 28 days: NICE guideline 2016 (CG98). Arch Dis Child Educ Pract Ed. 2017;102:207-209. doi: 10.1136/archdischild-2016-311556
7. Olusanya BO, Mabogunje CA, Imosemi DO, et al. Transcutaneous bilirubin nomograms in African neonates. PLoS ONE. 2017;12:e0172058. doi: 10.1371/journal.pone.0172058
EVIDENCE SUMMARY
Some evidence suggests overestimation in all skin tones
In a prospective diagnostic cohort study of 1553 infants in Nigeria, the accuracy of TcB measurement with 2 transcutaneous bilirubinometers (Konica Minolta/Air Shields JM- 103 and Respironics BiliChek) was analyzed. 1 The study population was derived from neonates delivered in a single maternity hospital in Lagos who were ≥ 35 weeks gestational age or ≥ 2.2 kg.
Using a color scale generated for this population, researchers stratified neonates into 1 of 3 skin tone groups: light brown, medium brown, or dark brown. TcB and TSB paired samples were collected in the first 120 hours of life in all patients. JM-103 recordings comprised 71.9% of TcB readings.
Overall, TcB testing overestimated the TSB by ≥ 2 mg/dL in 64.5% of infants, ≥ 3 mg/dL in 42.7%, and > 4 mg/dL in 25.7%. TcB testing underestimated the TSB by ≥ 2 mg/dL in 1.1% of infants, ≥ 3 mg/dL in 0.5%, and > 4 mg/dL in 0.3%.1
Local variation in skin tone was not associated with changes in overestimation, although the researchers noted that a key limitation of the study was a lack of lighttoned infants for comparison.1
A prospective diagnostic cohort study of 1359 infants in Spain compared TcB measurements to TSB levels using the Dräger Jaundice Meter JM-105.2 Patients included all neonates (gestational age, 36.6 to 41.1 weeks) born at a single hospital in Barcelona.
Using a validated skin tone scale, researchers stratified neonates at 24 hours of life to 1 of 4 skin tones: light (n = 337), medium light (n = 750), medium dark (n = 249), and dark (n = 23). They then obtained TSB samples at 48 to 72 hours of life, along with other routine screening labs and midsternal TcB measurements.
TcB testing tended to overestimate TSB (when < 15 mg/dL) for all skin tones, although to a larger degree for neonates with dark skin tones (mean overestimation, 0.7 mg/dL for light; 1.08 mg/dL for medium light; 1.89 mg/dL for medium dark; and 1.86 mg/dL for dark; P < .001 for light vs medium dark or dark).2
Continue to: Stated limitations...
Stated limitations of the study included relatively low numbers of neonates with dark skin tone, no test of interobserver reliability in skin tone assignment, and enrollment of exclusively healthy neonates with low bilirubin levels.2
Other studies report overestimation in infants with darker skin tone
Two Canadian diagnostic cohort studies also found evidence that TcB testing overestimated TSB in infants with darker skin tones, although TcB test characteristics proved stable over a wide range of bilirubin levels.
The first study enrolled 451 neonates ≥ 35 weeks gestational age at a hospital in Ottawa and assessed TcB using the JM-103 meter.3 The neonates were stratified into light (n = 51), medium (n = 326), and dark (n = 74) skin tones using cosmetic reference color swatches. All had a TcB and TSB obtained within 30 minutes of each other.
TcB testing underestimated TSB in infants with light and medium skin tones and overestimated TSB in infants with darker skin tone (mean difference, –0.88 mg/dL for light; –1.1 mg/dL for medium; and 0.68 mg/dL for dark; P not given). The mean area under the curve (AUC) was ≥ 0.94 for all receiver–operator characteristic (ROC) curves across all skin tones and bilirubin thresholds (AUC range, 0-1, with > 0.8 indicating strong modeling).3
Limitations of the study included failure to check interrater reliability for skin tone assessment, low numbers of infants with elevated bilirubin (≥ 13.5 mg/dL), and very few infants in either the dark or light skin tone groups.3
Continue to: The second Canadian study...
The second Canadian study enrolled 774 infants born at ≥ 37 weeks gestational age in Calgary and assessed TcB with the JM-103.4 Infants were categorized as having light (n = 347), medium (n = 412), and dark (n = 15) skin tones by study nurses, based on reference cosmetic colors. All infants had paired TcB and TSB measurements within 60 minutes of each other and before 120 hours of life.
Multivariate linear regression analysis using medium skin tone as the reference group found a tendency toward low TcB levels in infants with light skin tone and a tendency toward high TcB levels in infants with dark skin tone (adjusted R2 = 0.86). The AUC was ≥ 0.95 for all ROC curves for lightand medium-toned infants at key TSB cutoff points; the study included too few infants with dark skin tone to generate ROC curves for that group.4
Recommendations from others
In 2009, the American Academy of Pediatrics (AAP) recommended universal predischarge screening for hyperbilirubinemia in newborns using either TcB testing or TSB. The AAP statement did not address the effect of skin tone on TcB levels, but did advise regular calibration of TcB and TSB results at the hospital level.5
In 2016, the National Institute for Health and Care Excellence (NICE) updated their guideline on jaundice in newborns younger than 28 days old. NICE recommended visual inspection of all babies for jaundice by examining them in bright natural light and looking for jaundice on blanched skin; it specifically advised checking sclera and gums in infants with darker skin tones.6
The Nigerian researchers noted earlier have published an updated TcB nomogram for their patient population.7
Editor’s takeaway
Despite the small variation between transcutaneous and serum bilirubin (2 mg/dL or less), and the SOR of C reflecting disease-oriented laboratory evidence, TcB remains a useful screening tool. In practice, a concerning TcB value should prompt confirmatory serum bilirubin testing. This evidence also suggests that skin tone may influence how often TSB measurements are ordered, reinforcing the need to review and adjust TcB cutoff levels for the local patient population.
1. Olusanya BO, Imosemi DO, Emokpae AA. Differences between transcutaneous and serum bilirubin measurements in Black African neonates. Pediatrics. 2016;138:e20160907. doi: 10.1542/peds.2016-0907
2. Maya-Enero S, Candel-Pau J, Garcia-Garcia J, et al. Reliability of transcutaneous bilirubin determination based on skin color determined by a neonatal skin color scale of our own. Eur J Pediatr. 2021;180:607-616. doi: 10.1007/s00431-020-03885-0
3. Samiee-Zafarghandy S, Feberova J, Williams K, et al. Influence of skin colour on diagnostic accuracy of the jaundice meter JM 103 in newborns. Arch Dis Child Fetal Neonatal Ed. 2014;99:F480-F484. doi: 10.1136/archdischild-2013-305699
4. Wainer S, Rabi Y, Parmar SM, et al. Impact of skin tone on the performance of a transcutaneous jaundice meter. Acta Paediatr. 2009;98:1909-1915. doi: 10.1111/j.1651-2227.2009.01497.x
5. Maisels MJ, Bhutani VK, Bogen D, et al. Hyperbilirubinemia in the newborn infant ≥ 35 weeks' gestation: an update with clarifications. Pediatrics. 2009;124:1193-1198. doi: 10.1542/peds.2009-0329
6. Amos RC, Jacob H, Leith W. Jaundice in newborn babies under 28 days: NICE guideline 2016 (CG98). Arch Dis Child Educ Pract Ed. 2017;102:207-209. doi: 10.1136/archdischild-2016-311556
7. Olusanya BO, Mabogunje CA, Imosemi DO, et al. Transcutaneous bilirubin nomograms in African neonates. PLoS ONE. 2017;12:e0172058. doi: 10.1371/journal.pone.0172058
EVIDENCE-BASED ANSWER:
Fairly accurate. Photometric transcutaneous bilirubin (TcB) testing may overestimate total serum bilirubin (TSB) in neonates with darker skin tones by a mean of 0.68 to > 2 mg/dL (strength of recommendation [SOR]: C, diagnostic cohort studies with differing reference standards).
Overall, TcB meters retain acceptable accuracy in infants of all skin tones across a range of bilirubin levels, despite being more likely to underestimate TSB in infants with lighter skin tones and overestimate it in those with darker skin tones (SOR: C, diagnostic cohort studies with differing reference standards). It is unclear whether the higher readings prompt an increase in blood draws or otherwise alter care.