Precision blood test for rheumatoid arthritis receives Medicare coverage

Medicare will now cover a molecular diagnostic test to predict treatment response for certain patients with rheumatoid arthritis.

The blood test, PrismRA, is the first and only commercially available test that can help predict which patients with RA are unlikely to respond to tumor necrosis factor inhibitor (TNFi) therapy, according to the test manufacturer, Scipher Medicine.

“Precision medicine will now be accessible to many patients suffering from RA, a potentially debilitating disease if not treated with the right therapy,” Alif Saleh, the company’s chief executive officer, said in a press release on Sept. 7. “This coverage decision not only represents a significant benefit for patients today but also ushers in a new era of precision medicine in autoimmune diseases.”

PrismRA became available for commercial billing in December 2021. The test costs about $5,000, but most insured patients pay less than $75 out of pocket, according to Scipher.

On Sept. 1, 2022, the Medicare administrative contractor Palmetto GBA published a draft recommendation that the test should not be covered by the national health insurance program, stating that biomarker tests “have not yet demonstrated definitive value above the combination of available clinical, laboratory, and demographic data.”

During the comment period, clinicians urged the contractor to reconsider.

“I do not have a test or clinical assessment to inform me of the right biologic for my patients. I utilize the PrismRA test to inform me which biologic is the best start. Without this valuable tool, I am left with prescribing based on what is dictated by the patient’s insurance,” wrote one commenter.

These responses and additional data published during the comment period led Palmetto GBA to revise its decision.

“We agree that despite the many limitations of predictive biomarker tests, a review of the evidence supports their limited use given their demonstrated validity and utility,” the company wrote in response. “Specifically, when a nonresponse signature is obtained by the molecular signature response classifier, nearly 90% of those patients will prove to not clinically respond to TNFi therapies using multiple validated disease response criteria including the ACR50 and CDAI. For these patients, a change in management would ultimately serve to avoid time on an unnecessary therapy and shorten the time to an appropriate therapy.”
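
As a rough illustration of what that “nearly 90%” figure means in practice, the sketch below computes the predictive value of a nonresponse signature from hypothetical counts; the numbers are invented for the example and are not data from the PrismRA validation studies.

```python
# Hypothetical counts, for illustration only -- not PrismRA study data.
signature_positive_total = 100   # patients whose test shows a nonresponse signature
true_nonresponders = 88          # of those, patients who truly fail TNFi therapy

# Proportion of signature-positive patients who prove to be nonresponders
# (the "nearly 90%" predictive value described in the coverage decision).
predictive_value = true_nonresponders / signature_positive_total
print(f"{predictive_value:.0%} of flagged patients do not respond to TNFi")
```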

The local coverage determination (LCD) provides Medicare coverage nationally for patients who have a confirmed diagnosis of moderately to severely active RA, have failed first-line therapy, and either have not yet started biologic or targeted synthetic therapy or are being considered for an alternative class of targeted therapy after failure of an initial targeted therapy despite adequate dosing.

The LCD becomes effective for tests performed on or after Oct. 15, 2023.

A version of this article first appeared on Medscape.com.

BCR is unreliable surrogate for overall survival in prostate cancer

TOPLINE

Biochemical recurrence (BCR) falls short as a reliable surrogate for overall survival in localized prostate cancer trials and may not be a suitable primary endpoint.

METHODOLOGY

  • In trials of localized prostate cancer, BCR remains a controversial surrogate endpoint for overall survival.
  • The meta-analysis included 10,741 patients from 11 randomized clinical trials; the median follow-up was 9.2 years.
  • Interventions included radiotherapy dose escalation, in which high-dose radiotherapy was compared with conventional radiotherapy (n = 3,639); short-term androgen deprivation therapy (ADT), in which radiotherapy plus short-term ADT was compared with radiotherapy alone (n = 3,930); and ADT prolongation, in which radiotherapy plus long-term ADT was compared with radiotherapy plus short-term ADT (n = 3,772).
  • Prentice criteria and the two-stage meta-analytic approach were used to assess BCR as a surrogate endpoint for overall survival (see the pooling sketch after this list).
  • The researchers assessed the treatment effect on BCR and on overall survival.
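
To make the “two-stage” idea concrete: each trial first yields its own treatment-effect estimate (for example, a log hazard ratio and its standard error), and those estimates are then pooled across trials. The sketch below shows one common second-stage approach, DerSimonian-Laird random-effects pooling; the per-trial numbers are hypothetical placeholders, not the study’s data.

```python
import math

# Stage 1 (done within each trial elsewhere): a log hazard ratio and its
# standard error per trial. These values are hypothetical placeholders.
log_hrs = [-0.34, -0.46, -0.62]
ses = [0.10, 0.12, 0.11]

w = [1 / se**2 for se in ses]  # inverse-variance (fixed-effect) weights
fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)

# DerSimonian-Laird estimate of between-trial variance (tau^2)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(log_hrs) - 1)) / c)

# Stage 2: random-effects pooled estimate and 95% CI
w_re = [1 / (se**2 + tau2) for se in ses]
pooled = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(f"Pooled HR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.2f})")
```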

TAKEAWAY

  • With regard to treatment effect on BCR, the three interventions significantly reduced BCR risk – dose escalation by 29%, short-term ADT by 47%, and ADT prolongation by 46%. With regard to survival, only short-term ADT and ADT prolongation significantly improved overall survival, by 9% and 14%, respectively.
  • At 48 months, BCR was associated with significantly increased mortality risk: 2.46-fold increased risk for dose escalation, 1.51-fold greater risk for short-term ADT, and 2.31-fold higher risk for ADT prolongation.
  • However, after adjusting for BCR at 48 months, there was no significant treatment effect on overall survival (hazard ratio [HR], 1.10; 95% confidence interval [CI], 0.96-1.27; HR, 0.96; 95% CI, 0.87-1.06; HR, 1.00; 95% CI, 0.90-1.12, respectively).
  • Patient-level correlation between time to BCR and overall survival was low after censoring for noncancer-related deaths. The correlation between BCR-free survival and overall survival ranged from low to moderate.

IN PRACTICE

Overall, “these results strongly suggest that BCR-based endpoints should not be the primary endpoint in randomized trials conducted for localized [prostate cancer],” the authors concluded. They added that metastasis-free survival represents a more appropriate measure.

SOURCE

The study was led by senior author Amar Kishan, MD, of the University of California, Los Angeles, and was published online in the Journal of Clinical Oncology.

LIMITATIONS

  • The trials used different definitions of BCR – the older American Society for Therapeutic Radiology and Oncology definition and the more current Phoenix criteria.
  • Some trials were conducted more than 20 years ago, and a variety of factors, including patient selection, staging, diagnostic criteria, and therapeutic approaches, have evolved in that time.
  • Quality of life was not captured.

DISCLOSURES

The study received support from Cancer Research UK, the UK National Health Service, the National Institutes of Health Prostate Cancer Specialized Programs of Research Excellence, the U.S. Department of Defense, the Prostate Cancer Foundation, and the American Society for Radiation Oncology. Authors’ relevant financial relationships are detailed in the published study.

A version of this article appeared on Medscape.com.

Ketogenic diet short-term may benefit women with PCOS

Analysis examined data from seven studies

Ketogenic diets may improve reproductive hormone levels in women with polycystic ovary syndrome (PCOS), new research suggests.

In the first-ever systematic review and meta-analysis of clinical trials on the association, ketogenic diets followed for 45 days to 24 weeks showed improvements in the luteinizing hormone (LH)/follicle-stimulating hormone (FSH) ratio, serum free testosterone, and serum sex hormone binding globulin (SHBG).  

Previous evidence supporting ketogenic diets in PCOS has been “relatively patchy,” and although there have been reviews on the topic, this is the first meta-analysis, write Karniza Khalid, MD, of the National Institutes of Health, Ministry of Health Malaysia, and colleagues. 

Study co-author Syed A.A. Rizvi, MD, PhD, told this news organization: “Our paper supports the positive effects of short-term ketogenic diets on hormonal imbalances commonly associated with PCOS, a complex disease state associated with a multitude of presenting symptoms among individuals. Based on the presentation and individual patient circumstances, besides pharmacologic treatment, lifestyle changes and a ketogenic diet can lead to even faster improvements.”

However, Dr. Rizvi, a professor at the College of Biomedical Sciences, Larkin University, Miami, cautioned: “I would highly recommend a keto diet to women suffering from PCOS, but we all know every person has a different situation. Some may not want to change their diet, some may not be able to afford it, and for some it is just too much work. ... This is why any lifestyle change has to be discussed and planned carefully between patients and their health care providers.”

The findings were published online in the Journal of the Endocrine Society.

The literature search yielded seven qualifying studies of ketogenic diets, generally defined as a daily carbohydrate intake below 50 g while allowing variable amounts of fat and protein. A total of 170 participants were enrolled in the studies from Italy, China, and the United States.

Pooled data showed a significant association between ketogenic diet and reduced LH/FSH ratio (P < .001) and free testosterone (P < .001). There was also a significant increase in circulating SHBG (P = .002).

On the other hand, serum progesterone levels did not change significantly (P = .353).

Weight loss, a secondary outcome, was significantly greater with the ketogenic diet (P < .001).

“Since low-carbohydrate diets have shown to be effective in addressing obesity and type 2 diabetes, it makes sense that they would also be helpful to the patients with PCOS, and in fact, it has been the case,” Dr. Rizvi noted.

The exact mechanisms for the hormonal effects aren’t clear, but one theory is that the reduction in hyperinsulinemia from the ketogenic diet decreases stimulation of ovarian androgen production and increases SHBG levels. Another is that the physiologic ketosis induced by low carbohydrate intake reduces both circulating insulin and insulin-like growth factor-1, thereby suppressing the stimulus on the production of both ovarian and adrenal androgens.

The analysis didn’t include pregnancy rates. However, Dr. Rizvi noted, “there have been published studies showing that [patients with] PCOS on keto diets have significantly improved pregnancy rates, also including via [in vitro fertilization].”

The study received no outside funding. The authors have reported no relevant financial relationships.

A version of this article appeared on Medscape.com.

No benefit of anti-inflammatory strategy in acute myocarditis

A short course of the interleukin-1 receptor antagonist anakinra appeared safe but did not reduce complications of acute myocarditis in the ARAMIS trial.

The trial was presented at the annual congress of the European Society of Cardiology.

Lead investigator Mathieu Kerneis, MD, of Pitié-Salpêtrière University Hospital (AP-HP), Paris, said this was the largest randomized controlled trial of patients with acute myocarditis and probably the first study in the acute setting to enroll myocarditis patients diagnosed by cardiac magnetic resonance (CMR) imaging rather than biopsy, most of whom are at low risk for events.

He suggested that one of the reasons for the neutral result could have been the low-risk population involved and the low complication rate. “We enrolled an all-comer acute myocarditis population diagnosed with CMR, who were mostly at a low risk of complications,” he noted.

“I don’t think the story of anti-inflammatory drugs in acute myocarditis is over. This is just the beginning. This was the first trial, and it was just a phase 2 trial. We need further randomized trials to explore the potential benefit of an anti-inflammatory strategy in acute myocarditis patients at higher risk of complications. In addition, larger studies are needed to evaluate prolonged anti-inflammatory strategies in acute myocarditis patients at low-to-moderate risk of complications,” Dr. Kerneis concluded.

“It is very challenging to do a trial in high-risk patients with myocarditis as these patients are quite rare,” he added.

Inflammation of the myocardium

Dr. Kerneis explained that acute myocarditis is an inflammation of the myocardium that can cause permanent damage to the heart muscle and lead to myocardial infarction, stroke, heart failure, arrhythmias, and death. The condition can occur in individuals of all ages but is most frequent in young people. There is no specific treatment, but patients are generally treated with beta-blockers, angiotensin-converting enzyme (ACE) inhibitors, and sometimes steroids.

Anakinra is an interleukin-1 receptor antagonist that works by targeting the interleukin-1β innate immune pathway. Anakinra is used for the treatment of rheumatoid arthritis and has shown efficacy in pericarditis. Dr. Kerneis noted that there have been several case reports of successful treatment with anakinra in acute myocarditis.

The ARAMIS trial – conducted at six academic centers in France – was the first randomized study to evaluate inhibition of the interleukin-1β innate immune pathway in myocarditis patients. The trial enrolled 120 hospitalized, symptomatic patients with chest pain, increased cardiac troponin, and acute myocarditis diagnosed using CMR. More than half had had a recent bacterial or viral infection.

Patients were randomized within 72 hours of hospital admission to a daily subcutaneous dose of anakinra 100 mg or placebo until hospital discharge. Patients in both groups received standard-of-care treatments, including an ACE inhibitor, for at least 1 month. Consistent with prior data, the median age of participants was 28 years and 90% were men.

The primary endpoint was the number of days free of myocarditis complications (heart failure requiring hospitalization, chest pain requiring medication, left ventricular ejection fraction less than 50%, and ventricular arrhythmias) within 28 days postdischarge.

There was no significant difference in this endpoint between the two arms, with a median of 30 days for anakinra versus 31 days for placebo.

Overall, the composite endpoint of myocarditis complications occurred in 13.7% of patients, with a numerical reduction with anakinra: 6 patients (10.5%) in the anakinra group versus 10 patients (16.5%) in the placebo group (odds ratio, 0.59; 95% confidence interval, 0.19-1.78). The difference was driven by fewer patients with chest pain requiring new medication (two versus six).
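
As a quick plausibility check on the reported odds ratio, the sketch below recomputes it with a standard Wald interval, assuming group sizes of roughly 57 and 61 inferred from the reported percentages (the trial’s exact denominators may differ slightly).

```python
import math

# Assumed group sizes, inferred from 6/57 = 10.5% and 10/61 = 16.4%;
# the published denominators may differ slightly.
a, n1 = 6, 57    # complications, anakinra
c, n2 = 10, 61   # complications, placebo
b, d = n1 - a, n2 - c

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~0.60 (0.20-1.77)
```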

The safety endpoint was the number of serious adverse events within 28 days postdischarge. This endpoint occurred in seven patients (12.1%) in the anakinra arm and six patients (10.2%) in the placebo arm, with no significant difference between groups. Cases of severe infection within 28 days postdischarge were reported in both arms.

Low-risk population

The designated discussant of the study at the ESC Hotline session, Enrico Ammirati, MD, PhD, of the University of Milano-Bicocca, Monza, Italy, said that the patients involved in ARAMIS fit the profile of acute myocarditis and that the CMR diagnosis was positive in all enrolled patients.

Dr. Ammirati agreed with Dr. Kerneis that the neutral results of the study were probably caused by the low-risk population. “If we look at retrospective registries, at 30 days there are zero cardiac deaths or heart transplants at 30 days in patients with a low-risk presentation.

“The ARAMIS trial has shown the feasibility of conducting studies in the setting of acute myocarditis, and even if the primary endpoint was neutral, some important data are still missing, such as change in ejection fraction and troponin levels,” he noted.

“In terms of future perspective, we are moving to assessing efficacy of anakinra or other immunosuppressive drugs from acute low risk patients to higher risk patients with heart failure and severe dysfunction,” he said.  

Dr. Ammirati is the lead investigator of another ongoing study in such a higher-risk population; the MYTHS trial is investigating the use of intravenous steroids in patients with suspected acute myocarditis complicated by acute heart failure or cardiogenic shock, and an ejection fraction below 41%.

“So, we will have more results on the best treatment in this higher risk group of patients,” he concluded.

The ARAMIS trial was an academic study funded by the French Health Ministry and coordinated by the ACTION Group. Dr. Kerneis reports having received consulting fees from Kiniksa, Sanofi, and Bayer, and holds a patent for use of abatacept in immune checkpoint inhibitor (ICI)–induced myocarditis.

A version of this article first appeared on Medscape.com.

The new normal in body temperature

This transcript has been edited for clarity.

Every branch of science has its constants. Physics has the speed of light, the gravitational constant, the Planck constant. Chemistry gives us Avogadro’s number, Faraday’s constant, the charge of an electron. Medicine isn’t quite as reliable as physics when it comes to these things, but insofar as there are any constants in medicine, might I suggest normal body temperature: 37° Celsius, 98.6° Fahrenheit.

Sure, serum sodium may be less variable and lactate concentration more clinically relevant, but even my 7-year-old knows that normal body temperature is 98.6°.

Except, as it turns out, 98.6° isn’t normal at all.

How did we arrive at 37.0° C for normal body temperature? We got it from this guy – German physician Carl Reinhold August Wunderlich, who, in addition to looking eerily like Luciano Pavarotti, was the first to realize that fever was not itself a disease but a symptom of one.

In 1851, Dr. Wunderlich released his measurements of more than 1 million body temperatures taken from 25,000 Germans – a painstaking process at the time, which employed a foot-long thermometer and took 20 minutes to obtain a measurement.

The average temperature measured, of course, was 37° C.

We’re more than 150 years post-Wunderlich right now, and the average person in the United States might be quite a bit different from the average German in 1850. Moreover, we can do a lot better than just measuring a ton of people and taking the average, because we have statistics. The problem with measuring a bunch of people and taking the average temperature as normal is that you can’t be sure that the people you are measuring are normal. There are obvious causes of elevated temperature that you could exclude. Let’s not take people with a respiratory infection or who are taking Tylenol, for example. But as highlighted in this paper in JAMA Internal Medicine, we can do a lot better than that.

The study leverages the fact that body temperature is typically measured during all medical office visits and recorded in the ever-present electronic medical record.

Researchers from Stanford identified 724,199 patient encounters with outpatient temperature data. They excluded extreme temperatures – less than 34° C or greater than 40° C – excluded patients under 20 or above 80 years, and excluded those with extremes of height, weight, or body mass index.

You end up with a distribution like this. Note that the peak is clearly lower than 37° C.

But we’re still not at “normal.” Some people would be seeing their doctor for conditions that affect body temperature, such as infection. You could use diagnosis codes to flag these individuals and drop them, but that feels a bit arbitrary.

I really love how the researchers used data to fix this problem. They used a technique called LIMIT (Laboratory Information Mining for Individualized Thresholds). It works like this:

Take all the temperature measurements and then identify the outliers – the very tails of the distribution.

Look at all the diagnosis codes in those distributions. Determine which diagnosis codes are overrepresented in those distributions. Now you have a data-driven way to say that yes, these diagnoses are associated with weird temperatures. Next, eliminate everyone with those diagnoses from the dataset. What you are left with is a normal population, or at least a population that doesn’t have a condition that seems to meaningfully affect temperature.
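
Here is a minimal sketch of that filtering logic, assuming an encounter table with one diagnosis code per row. The real method handles multiple codes per encounter and uses formal statistics for overrepresentation; this is just the shape of the idea, not the authors’ implementation.

```python
import pandas as pd

def limit_style_filter(df: pd.DataFrame, tail: float = 0.025,
                       enrichment: float = 1.5) -> pd.DataFrame:
    """Drop encounters whose diagnoses are overrepresented in temperature outliers.

    df is assumed to have columns 'temp_c' and 'diagnosis_code'.
    """
    lo, hi = df["temp_c"].quantile([tail, 1 - tail])
    outliers = df[(df["temp_c"] < lo) | (df["temp_c"] > hi)]

    # How much more common is each diagnosis in the tails than overall?
    overall = df["diagnosis_code"].value_counts(normalize=True)
    in_tails = outliers["diagnosis_code"].value_counts(normalize=True)
    ratio = (in_tails / overall).dropna()

    # Flag diagnoses enriched in the tails (e.g., diabetes, cough, fever)
    flagged = set(ratio[ratio > enrichment].index)
    return df[~df["diagnosis_code"].isin(flagged)]

# After filtering, the mean of the remaining temperatures approximates
# the "normal" value the study reports (~36.6 C):
# normal_temp = limit_style_filter(encounters)["temp_c"].mean()
```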

So, who was dropped? Well, a lot of people, actually. It turned out that diabetes was way overrepresented in the outlier group. Although 9.2% of the population had diabetes, 26% of people with very low temperatures did, so everyone with diabetes is removed from the dataset. While 5% of the population had a cough at their encounter, 7% of the people with very high temperature and 7% of the people with very low temperature had a cough, so everyone with cough gets thrown out.

The algorithm excluded people on antibiotics or who had sinusitis, urinary tract infections, pneumonia, and, yes, a diagnosis of “fever.” The list makes sense, which is always nice when you have a purely algorithmic classification system.

What do we have left? What is the real normal temperature? Ready?

It’s 36.64° C, or about 98.0° F.

Of course, normal temperature varied depending on the time of day it was measured – higher in the afternoon.

The normal temperature in women tended to be higher than in men. The normal temperature declined with age as well.

In fact, the researchers built a nice online calculator where you can enter your own, or your patient’s, parameters and calculate a normal body temperature for them. Here’s mine. My normal temperature at around 2 p.m. should be 36.7° C.
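
In the same spirit as that calculator, here is a toy adjustment model. The coefficients below are invented for illustration and are not the published model’s estimates (the real calculator also accounts for height and weight).

```python
def expected_temp_c(age_years: float, female: bool, hour_of_day: float) -> float:
    """Toy individualized 'normal' temperature in deg C (illustrative only)."""
    temp = 36.6                          # approximate cohort average
    temp -= 0.003 * (age_years - 50)     # assumed slow decline with age
    temp += 0.05 if female else 0.0      # assumed sex offset (women slightly warmer)
    temp += 0.015 * (hour_of_day - 8)    # assumed rise toward the afternoon
    return round(temp, 2)

# A 40-year-old man measured around 2 p.m.:
print(expected_temp_c(40, False, 14))    # ~36.7
```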

So, we’re all more cold-blooded than we thought. Is this just because of better methods? Maybe. But studies have actually shown that body temperature may be decreasing over time in humans, possibly because of the lower levels of inflammation we face in modern life (thanks to improvements in hygiene and antibiotics).

Of course, I’m sure some of you are asking yourselves whether any of this really matters. Is 37° C close enough?

Sure, this may be sort of puttering around the edges of physical diagnosis, but I think the methodology is really interesting and can obviously be applied to other broadly collected data points. But these data show us that thin, older individuals really do run cooler, and that we may need to pay more attention to a low-grade fever in that population than we otherwise would.

In any case, it’s time for a little re-education. If someone asks you what normal body temperature is, just say 36.6° C, 98.0° F. For his work in this area, I suggest we call it Wunderlich’s constant.

Dr. Wilson is associate professor of medicine and public health at Yale University, New Haven, Conn., and director of Yale’s Clinical and Translational Research Accelerator. He has no disclosures.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

 

This transcript has been edited for clarity.

Every branch of science has its constants. Physics has the speed of light, the gravitational constant, the Planck constant. Chemistry gives us Avogadro’s number, Faraday’s constant, the charge of an electron. Medicine isn’t quite as reliable as physics when it comes to these things, but insofar as there are any constants in medicine, might I suggest normal body temperature: 37° Celsius, 98.6° Fahrenheit.

Sure, serum sodium may be less variable and lactate concentration more clinically relevant, but even my 7-year-old knows that normal body temperature is 98.6°.

Except, as it turns out, 98.6° isn’t normal at all.

How did we arrive at 37.0° C for normal body temperature? We got it from this guy – German physician Carl Reinhold August Wunderlich, who, in addition to looking eerily like Luciano Pavarotti, was the first to realize that fever was not itself a disease but a symptom of one.

In 1851, Dr. Wunderlich released his measurements of more than 1 million body temperatures taken from 25,000 Germans – a painstaking process at the time, which employed a foot-long thermometer and took 20 minutes to obtain a measurement.

The average temperature measured, of course, was 37° C.

We’re more than 150 years post-Wunderlich right now, and the average person in the United States might be quite a bit different from the average German in 1850. Moreover, we can do a lot better than just measuring a ton of people and taking the average, because we have statistics. The problem with measuring a bunch of people and taking the average temperature as normal is that you can’t be sure that the people you are measuring are normal. There are obvious causes of elevated temperature that you could exclude. Let’s not take people with a respiratory infection or who are taking Tylenol, for example. But as highlighted in this paper in JAMA Internal Medicine, we can do a lot better than that.

The study leverages the fact that body temperature is typically measured during all medical office visits and recorded in the ever-present electronic medical record.

Researchers from Stanford identified 724,199 patient encounters with outpatient temperature data. They excluded extreme temperatures (less than 34° C or greater than 40° C), patients younger than 20 or older than 80 years, and those with extremes of height, weight, or body mass index.
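
To make those exclusions concrete, here is a minimal sketch of the pre-filtering step in Python. The DataFrame layout, the column names, and the percentile cutoffs for BMI are assumptions for illustration, not the study's actual code:

```python
# Illustrative pre-filter only; column names and the BMI percentile
# cutoffs are assumptions, not taken from the study. Height and weight
# extremes would be trimmed the same way.
import pandas as pd

def prefilter(encounters: pd.DataFrame) -> pd.DataFrame:
    """Keep encounters with plausible temperatures and non-extreme ages/BMI."""
    return encounters[
        encounters["temp_c"].between(34.0, 40.0)
        & encounters["age_years"].between(20, 80)
        & encounters["bmi"].between(encounters["bmi"].quantile(0.01),
                                    encounters["bmi"].quantile(0.99))
    ]
```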

You end up with a distribution like this. Note that the peak is clearly lower than 37° C.

[Figure: distribution of measured outpatient temperatures – JAMA Internal Medicine]


But we’re still not at “normal.” Some people would be seeing their doctor for conditions that affect body temperature, such as infection. You could use diagnosis codes to flag these individuals and drop them, but that feels a bit arbitrary.

I really love how the researchers used data to fix this problem. They used a technique called LIMIT (Laboratory Information Mining for Individualized Thresholds). It works like this:

Take all the temperature measurements and then identify the outliers – the very tails of the distribution.

[Figure: the tails of the temperature distribution, flagged as outliers – JAMA Internal Medicine]


Look at all the diagnosis codes in those tails and determine which codes are overrepresented there. Now you have a data-driven way to say that yes, these diagnoses are associated with weird temperatures. Next, eliminate everyone with those diagnoses from the dataset. What you are left with is a normal population, or at least a population that doesn’t have a condition that seems to meaningfully affect temperature.
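
Here is a minimal sketch of that logic in Python. It is not the authors' actual LIMIT implementation – the tail cutoffs, the enrichment ratio, and the column names are all assumptions – but it captures the flow: find the tails, find the overrepresented codes, drop everyone carrying them.

```python
# Minimal LIMIT-style sketch; thresholds and column names are assumptions.
import pandas as pd

def limit_filter(df: pd.DataFrame, tail_pct: float = 0.025,
                 enrich_ratio: float = 2.0) -> pd.DataFrame:
    """Return encounters without diagnoses enriched in the temperature tails."""
    # 1. Flag the tails of the temperature distribution as outliers.
    lo, hi = df["temp_c"].quantile([tail_pct, 1 - tail_pct])
    tails = df[(df["temp_c"] < lo) | (df["temp_c"] > hi)]

    # 2. Compare each code's prevalence in the tails vs. the whole cohort
    #    (dx_codes holds a list of diagnosis codes per encounter).
    overall = df.explode("dx_codes")["dx_codes"].value_counts() / len(df)
    in_tails = tails.explode("dx_codes")["dx_codes"].value_counts() / len(tails)
    ratio = (in_tails / overall).dropna()

    # 3. Codes overrepresented in the tails mark temperature-affecting diagnoses.
    flagged = set(ratio[ratio >= enrich_ratio].index)

    # 4. Drop every encounter carrying a flagged code; the remainder is "normal."
    return df[df["dx_codes"].apply(lambda codes: flagged.isdisjoint(codes))]
```

This enrichment step is exactly how the diagnoses discussed next end up on the exclusion list.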

So, who was dropped? Well, a lot of people, actually. It turned out that diabetes was way overrepresented in the outlier group: although 9.2% of the population had diabetes, 26% of people with very low temperatures did, so everyone with diabetes was removed from the dataset. While 5% of the population had a cough at their encounter, 7% of the people with very high temperatures and 7% of the people with very low temperatures had a cough, so everyone with a cough was thrown out.

The algorithm excluded people on antibiotics or who had sinusitis, urinary tract infections, pneumonia, and, yes, a diagnosis of “fever.” The list makes sense, which is always nice when you have a purely algorithmic classification system.

What do we have left? What is the real normal temperature? Ready?

It’s 36.64° C, or about 98.0° F.

Of course, normal temperature varied depending on the time of day it was measured – higher in the afternoon.

[Figure: normal temperature by time of day – JAMA Internal Medicine]


The normal temperature in women tended to be higher than in men. The normal temperature declined with age as well.

[Figure: normal temperature by sex and age – JAMA Internal Medicine]


In fact, the researchers built a nice online calculator where you can enter your own, or your patient’s, parameters and calculate a normal body temperature for them. Here’s mine. My normal temperature at around 2 p.m. should be 36.7° C.

[Figure: the individualized normal-temperature calculator – JAMA Internal Medicine]


So, we’re all more cold-blooded than we thought. Is this just because of better methods? Maybe. But studies have actually shown that body temperature may be decreasing over time in humans, possibly because of the lower levels of inflammation we face in modern life (thanks to improvements in hygiene and antibiotics).

Of course, I’m sure some of you are asking yourselves whether any of this really matters. Is 37° C close enough?

Sure, this may be sort of puttering around the edges of physical diagnosis, but I think the methodology is really interesting and can obviously be applied to other broadly collected data points. And these data show us that thin, older individuals really do run cooler, and that we may need to pay more attention to a low-grade fever in that population than we otherwise would.

In any case, it’s time for a little re-education. If someone asks you what normal body temperature is, just say 36.6° C (98.0° F). For his work in this area, I suggest we call it Wunderlich’s constant.

Dr. Wilson is associate professor of medicine and public health at Yale University, New Haven, Conn., and director of Yale’s Clinical and Translational Research Accelerator. He has no disclosures.

A version of this article appeared on Medscape.com.

The cult of the suicide risk assessment


Suicide is not a trivial matter – it upends families, robs partners of a loved one, prevents children from having a parent, and can destroy a parent’s most cherished being. It is not surprising that societies have repeatedly made it a goal to study and reduce suicide within their populations.

The suicide rate in the United States is trending upward, from about 10 per 100,000 in 2000 to about 15 per 100,000 in more recent reports. The increasing suicide rates have been accompanied by increasing distress among many strata of society. At the public health level, analysts are witnessing not just increasing suicide rates but a shocking rise in all “deaths of despair,”1 of which suicide can be considered the ultimate example.

On an individual level, many know someone who has died of suicide or suffered from a serious suicide attempt. From the public health level to the individual level, advocacy has called for various interventions in the field of psychiatry to remedy this tragic problem.

Psychiatrists have been firsthand witnesses to this increasing demand for suicide interventions. When we were in residency, the norm was to perform a suicide risk assessment at admission to the hospital and again at discharge. As the years passed, the new normal within psychiatric hospitals has shifted to asking about suicidality on a daily basis.

In what seems to us like an escalating arms race, the emerging standard of care at many facilities is now not only for daily suicide risk assessments by each psychiatrist, but also to require nurses to ask about suicidality during every 8-hour shift – in addition to documented inquiries about suicidality by other allied staff on the psychiatric unit. As a result, it is not uncommon for a patient hospitalized at an academic center to receive more than half a dozen suicide risk assessments in a day (first by the medical student, at least once – often more than once – by the resident, again by the attending psychiatrist, then the social worker and three nurses in 24 hours).

One of the concerns about such an approach is the lack of logic inherent to many risk assessment tools and symptom scales. Many of us are familiar with the Patient Health Questionnaire (PHQ-9) for assessing depression.2 The PHQ-9 asks patients to consider “over the last 2 weeks, how often have you ...” in relation to nine symptoms associated with depression. It has always defied reason to perform a PHQ-9 every day and expect the answers to change from “nearly every day” to “not at all” when only 1 day has passed since the patient last answered the questions. Yet daily, or near-daily, PHQ-9 scores are a frequently used tool for tracking symptom improvement in response to treatments, such as electroconvulsive therapy, performed multiple times a week.

One can argue that the patient’s perspective on how symptomatic he or she has been over the past 2 weeks may change rapidly with alleviation of a depressed mood. However, the PHQ-9 is both reported to be, and often regarded as, an objective score. If one wishes to utilize it as such, the defense of its use should not be that it is a subjective report with just as much utility as “Rate your depression on a scale of 0-27.”
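
For readers who have not scored one by hand, here is a minimal sketch of the PHQ-9 arithmetic. The 0–3 response options, nine items, and standard severity bands are as published; the function itself is only an illustration:

```python
# PHQ-9 scoring sketch. Each of the nine items is rated 0 ("not at all")
# through 3 ("nearly every day") over the last 2 weeks; totals range 0-27.
from typing import Sequence

SEVERITY_BANDS = [(0, "minimal"), (5, "mild"), (10, "moderate"),
                  (15, "moderately severe"), (20, "severe")]

def score_phq9(responses: Sequence[int]) -> tuple[int, str]:
    """Sum nine 0-3 responses and map the total to a severity band."""
    if len(responses) != 9 or any(r not in range(4) for r in responses):
        raise ValueError("PHQ-9 takes nine responses, each 0-3")
    total = sum(responses)
    label = max((cut, lbl) for cut, lbl in SEVERITY_BANDS if total >= cut)[1]
    return total, label

print(score_phq9([3] * 9))  # (27, 'severe')
```

The objection stands in the arithmetic itself: every item asks about a 2-week recall window, so a total cannot meaningfully change overnight.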

Similarly, many suicide scales were intended to assess thoughts of suicide in the past month3 or have been re-tooled to address this particular concern by asking “since the last contact.”4 It is baffling to see a chart with many dozens of suicide risk assessments with at times widely differing answers, yet all measuring thoughts of suicide in the past month. Is one to expect the answer to “How many times have you had these thoughts [of suicide ideation]? (1) Less than once a week (2) Once a week ...” to change between 8 a.m. and noon? Furthermore, for the purpose of assessing acute risk of suicidality in the immediate future, to only consider symptoms since the last contact – or past 2 weeks, past month, etc. – is of unclear significance.

Provider liability

Another concern is the liability placed on providers. A common problem encountered in the inpatient setting is insurance companies refusing to reimburse a hospital stay for depressed patients denying suicidality.

Any provider in the position of caring for such a patient must ask: What is the likelihood of someone providing a false negative – a false denial of suicidality? Is the likelihood of a suicidal person denying suicidality different if asked 5 or 10 or more times in a day? There are innumerable instances where a patient at very high risk of self-harm has denied suicidality, been discharged from the hospital, and suffered terrible consequences. Ethically, the psychiatrist aware of this risk is no more at ease discharging these patients whether one suicide risk scale or a dozen suggest the patient is at low risk.

Alternatively, it may feel untenable from a medicolegal perspective for a psychiatrist to discharge a patient denying suicidality when the chart includes over a dozen previously documented elevated suicide risk assessments in the past 72 hours. By placing the job of suicide risk assessment in the hands of providers of varying levels of training and responsibility, a situation is created in which the seasoned psychiatrist who would otherwise be comfortable discharging a patient feels unable to do so because every other note-writer in the record – from the triage nurse to the medical assistant to the sitter in the emergency department – has recorded the patient as high risk for suicide. When put in such a position, the thought often occurs that systems of care, rather than individual providers, are protected most by ever escalating requirements for suicide risk documentation. To make a clinical decision contrary to the body of suicide risk documentation puts the provider at risk of being scapegoated by the system of care, which can point to its illogical and ineffective, though profusely documented, suicide prevention protocols.

Limitations of risk assessments

Considering the ongoing rise in the use of suicide risk assessments, one would expect that the evidence for their efficacy was robust and well established. Yet a thorough review of suicide risk assessments funded by the MacArthur Foundation, which examined decades of research, came to disheartening conclusions: “predictive ability has not improved over the past 50 years”; “no risk factor category or subcategory is substantially stronger than any other”; and “predicting solely according to base rates may be comparable to prediction with current risk factors.”5

Those findings were consistent with the conclusions of many other studies, which have summarized the utility of suicide risk assessments as follows: “occurrence of suicide is too low to identify those individuals who are likely to die by suicide”;6 “suicide prediction models produce accurate overall classification models, but their accuracy of predicting a future event is near zero”;7 “risk stratification is too inaccurate to be clinically useful and might even be harmful”;8 “suicide risk prediction [lacks] any items or information that to a useful degree permit the identification of persons who will complete suicide”;9 “existing suicide prediction tools have little current clinical value”;10 “our current preoccupation with risk assessment has ... created a mythology with no evidence to support it.”11 And that’s to cite just a few.

Sadly, we have known about the limitations of suicide risk assessments for many decades. In 1983, a large VA prospective study, which aimed to identify veterans who would die by suicide, examined 4,800 patients with a wide range of instruments and measures.12 This study concluded that “discriminant analysis was clearly inadequate in correctly classifying the subjects. For an event as rare as suicide, our predictive tools and guides are simply not equal to the task.” The authors described the feelings of many in stating, “courts and public opinion expect physicians to be able to pick out the particular persons who will later commit suicide. Although we may reconstruct causal chains and motives, we do not possess the tools to predict suicides.”

Yet, even several decades prior, in 1954, Dr. Albert Rosen performed an elegant statistical analysis and predicted that, considering the low base rate of suicide, suicide risk assessments are “of no practical value, for it would be impossible to treat the prodigious number of false positives.”13 It seems that we continue to be unable to accept Dr. Rosen’s premonition despite decades of confirmatory evidence.
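
Rosen’s base-rate objection is easy to reproduce with Bayes’ rule. A minimal worked example, using the roughly 15-per-100,000 annual suicide rate cited above and a hypothetical, generously accurate screen (80% sensitivity, 90% specificity – assumed figures, not from any study):

```python
# Worked base-rate example. The base rate is the ~15/100,000 annual U.S.
# suicide rate cited earlier; the test characteristics are hypothetical.
base_rate = 15 / 100_000          # P(death by suicide within the year)
sensitivity = 0.80                # assumed P(screen positive | suicide)
specificity = 0.90                # assumed P(screen negative | no suicide)

true_pos = sensitivity * base_rate
false_pos = (1 - specificity) * (1 - base_rate)
ppv = true_pos / (true_pos + false_pos)

print(f"PPV = {ppv:.2%}")                                    # about 0.12%
print(f"False positives per true positive: {false_pos / true_pos:.0f}")  # ~833
```

Even under these generous assumptions, roughly 800 people screen positive for every one who dies – Rosen’s “prodigious number of false positives.”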

“Quantity over quality”

Regardless of those sobering reports, the field of psychiatry is seemingly doubling down on efforts to predict and prevent suicide deaths, and the way it is doing so has very questionable validity.

One can reasonably argue that the periodic performance of a suicide risk assessment may have clinical utility in reminding us of modifiable risk factors such as intoxication, social isolation, and access to lethal means. One can also reasonably argue that these risk assessments may provide useful education to patients and their families on epidemiological risk factors such as gender, age, and marital status. But our pursuit of serial suicide risk assessments throughout the day is encouraging providers to focus on a particular risk factor that changes from moment to moment and has particularly low validity, that being self-reported suicidality.

Reported suicidality is one of the few risk factors that can change from shift to shift. But 80% of people who die by suicide had not previously expressed suicidality, and 98.3% of people who have endorsed suicidality do not die by suicide.14 While the former statistic may improve with increased assessment, the latter will likely worsen.

Suicide is not a trivial matter. We admire those who study it and advocate for better interventions. We have compassion for those who have suffered the loss of a loved one to suicide. Our patients have died as a result of the human limitations surrounding suicide prevention. Recognizing the weight of suicide and making an effort to avoid minimizing its immense consequences drive our desire to be honest with ourselves, our patients and their families, and society. That includes the unfortunate truth regarding the current state of the evidence and our ability to enact change.

It is our concern that the rising fascination with repeated suicide risk assessment is misguided in its current form and serves the purpose of appeasing administrators more than reflecting a scientific understanding of the literature. More sadly, we are concerned that this “quantity-over-quality” approach is yet another barrier to practicing what may be one of the few interventions with any hope of meaningfully impacting a patient’s risk of suicide in the clinical setting – spending time connecting with our patients.

Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Compton is a member of the psychiatry faculty at University of California, San Diego. His background includes medical education, mental health advocacy, work with underserved populations, and brain cancer research. Dr. Badre and Dr. Compton have no conflicts of interest.

References

1. Joint Economic Committee. Long-term trends in deaths of despair. SCP Report 4-19, 2019.

2. Kroenke K and Spitzer RL. The PHQ-9: A new depression diagnostic and severity measure. Psychiatr Ann. 2002;32(9):509-15. doi: 10.3928/0048-5713-20020901-06.

3. Columbia-Suicide Severity Rating Scale (C-SSRS) Full Lifetime/Recent.

4. Columbia-Suicide Severity Rating Scale (C-SSRS) Full Since Last Contact.

5. Franklin JC et al. Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychol Bull. 2017 Feb;143(2):187-232. doi: 10.1037/bul0000084.

6. Beautrais AL. Further suicidal behavior among medically serious suicide attempters. Suicide Life Threat Behav. 2004 Spring;34(1):1-11. doi: 10.1521/suli.34.1.1.27772.

7. Belsher BE et al. Prediction models for suicide attempts and deaths: A systematic review and simulation. JAMA Psychiatry. 2019 Jun 1;76(6):642-651. doi: 10.1001/jamapsychiatry.2019.0174.

8. Carter G et al. Royal Australian and New Zealand College of Psychiatrists clinical practice guideline for the management of deliberate self-harm. Aust N Z J Psychiatry. 2016 Oct;50(10):939-1000. doi: 10.1177/0004867416661039.

9. Fosse R et al. Predictors of suicide in the patient population admitted to a locked-door psychiatric acute ward. PLoS One. 2017 Mar 16;12(3):e0173958. doi: 10.1371/journal.pone.0173958.

10. Kessler RC et al. Suicide prediction models: A critical review of recent research with recommendations for the way forward. Mol Psychiatry. 2020 Jan;25(1):168-79. doi: 10.1038/s41380-019-0531-0.

11. Mulder R. Problems with suicide risk assessment. Aust N Z J Psychiatry. 2011 Aug;45(8):605-7. doi: 10.3109/00048674.2011.594786.

12. Pokorny AD. Prediction of suicide in psychiatric patients: Report of a prospective study. Arch Gen Psychiatry. 1983 Mar;40(3):249-57. doi: 10.1001/archpsyc.1983.01790030019002.

13. Rosen A. Detection of suicidal patients: An example of some limitations in the prediction of infrequent events. J Consult Psychol. 1954 Dec;18(6):397-403. doi: 10.1037/h0058579.

14. McHugh CM et al. Association between suicidal ideation and suicide: Meta-analyses of odds ratios, sensitivity, specificity and positive predictive value. BJPsych Open. 2019 Mar;5(2):e18. doi: 10.1192/bjo.2018.88.


New AI-enhanced bandages poised to transform wound treatment


You cut yourself. You put on a bandage. In a week or so, your wound heals.

Most people take this routine for granted. But for the more than 8.2 million Americans who have chronic wounds, it’s not so simple.

Traumatic injuries, post-surgical complications, advanced age, and chronic illnesses like diabetes and vascular disease can all disrupt the delicate healing process, leading to wounds that last months or years. 

Left untreated, about 30% of chronic wounds lead to amputation. And recent studies show the risk of dying from a chronic wound complication within 5 years rivals that of most cancers.

Yet until recently, medical technology had not kept up with what experts say is a snowballing threat to public health.

“Wound care – even with all of the billions of products that are sold – still exists on kind of a medieval level,” said Geoffrey Gurtner, MD, chair of the department of surgery and professor of biomedical engineering at the University of Arizona College of Medicine. “We’re still putting on poultices and salves ... and when it comes to diagnosing infection, it’s really an art. I think we can do better.” 

Old-school bandage meets AI

Dr. Gurtner is among dozens of clinicians and researchers reimagining the humble bandage, combining cutting-edge materials science with artificial intelligence and patient data to develop “smart bandages” that do far more than shield a wound.

Someday soon, these paper-thin bandages embedded with miniaturized electronics could monitor the healing process in real time, alerting the patient – or a doctor – when things go wrong. With the press of a smartphone button, that bandage could deliver medicine to fight an infection or an electrical pulse to stimulate healing.

Some “closed-loop” designs need no prompting, instead monitoring the wound and automatically giving it what it needs.
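
To picture the closed-loop idea, here is a conceptual control-loop sketch. Every sensor channel, threshold, and actuator method named below is a hypothetical stand-in invented for illustration; it does not describe any real device’s firmware or API:

```python
# Conceptual closed-loop bandage controller; the sensors, thresholds, and
# actuator calls are hypothetical stand-ins, not a real device interface.
import time

TEMP_INFECTION_C = 38.5  # assumed wound-temperature alert threshold
PH_ALERT = 8.5           # assumed pH drift suggesting a stalled, infected wound

def control_loop(sensor, actuator, notify, interval_s: float = 60.0) -> None:
    """Poll the wound and respond automatically, without user prompting."""
    while True:
        temp_c = sensor.read_temperature()
        ph = sensor.read_ph()

        if temp_c > TEMP_INFECTION_C or ph > PH_ALERT:
            actuator.release_antimicrobial()    # local drug delivery
            notify(f"Possible infection: temp={temp_c:.1f} C, pH={ph:.1f}")
        elif sensor.healing_stalled():
            actuator.apply_stimulation(ms=100)  # brief electrical pulse
        time.sleep(interval_s)
```

The design point is that sensing, deciding, and actuating all happen on the bandage itself, with the smartphone alert as a side channel rather than the trigger.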

Others in development could keep a battlefield wound from hemorrhaging or kick-start healing in a blast wound, preventing longer-term disability.

The same technologies could – if the price is right – speed up healing and reduce scarring in minor cuts and scrapes, too, said Dr. Gurtner. 

And unlike many cutting-edge medical innovations, these next-generation bandages could be made relatively cheaply and benefit some of the most vulnerable populations, including older adults, people with low incomes, and those in developing countries.

They could also save the health care system money, as the U.S. spends more than $28 billion annually treating chronic wounds.

“This is a condition that many patients find shameful and embarrassing, so there hasn’t been a lot of advocacy,” said Dr. Gurtner, outgoing board president of the Wound Healing Society. “It’s a relatively ignored problem afflicting an underserved population that has a huge cost. It’s a perfect storm.”
 

How wounds heal, or don’t

Wound healing is one of the most complex processes of the human body.

First, platelets rush to the injury, prompting blood to clot. Then immune cells emit compounds called inflammatory cytokines, helping to fight off pathogens and keep infection at bay. Other compounds, including nitric oxide, spark the growth of new blood vessels and collagen to rebuild skin and connective tissue. As inflammation slows and stops, the flesh continues to re-form.

But some conditions can stall the process, often in the inflammatory stage. 

In people with diabetes, high glucose levels and poor circulation tend to sabotage the process. And people with nerve damage from spinal cord injuries, diabetes, or other ailments may not be able to feel it when a wound is getting worse or reinjured.

“We end up with patients going months with open wounds that are festering and infected,” said Roslyn Rivkah Isseroff, MD, professor of dermatology at the University of California Davis and head of the VA Northern California Health Care System’s wound healing clinic. “The patients are upset with the smell. These open ulcers put the patient at risk for systemic infection, like sepsis.” The condition can also take a toll on mental health, draining the patient’s ability to care for their wound.

“We see them once a week and send them home and say change your dressing every day, and they say, ‘I can barely move. I can’t do this,’ ” said Dr. Isseroff.

Checking for infection means removing bandages and culturing the wound. That can be painful, and results take time. 

A lot can happen to a wound in a week.

“Sometimes, they come back and it’s a disaster, and they have to be admitted to the ER or even get an amputation,” Dr. Gurtner said. 

People who are housing insecure or lack access to health care are even more vulnerable to complications. 

“If you had the ability to say ‘there is something bad happening,’ you could do a lot to prevent this cascade and downward spiral.”

Bandages 2.0

In 2019, the Defense Advanced Research Projects Agency, the research arm of the Department of Defense, launched the Bioelectronics for Tissue Regeneration program to encourage scientists to develop a “closed-loop” bandage capable of both monitoring and hastening healing.

Tens of millions of dollars in funding have since kick-started a flood of innovation.

“It’s kind of a race to the finish,” said Marco Rolandi, PhD, associate professor of electrical and computer engineering at the University of California Santa Cruz and the principal investigator for a team including engineers, medical doctors, and computer scientists from UC Santa Cruz, UC Davis, and Tufts. “I’ve been amazed and impressed at all the work coming out.”

His team’s goal is to cut healing time in half by combining (a) real-time monitoring of how a wound is healing – tracking indicators like temperature, pH level, oxygen, moisture, glucose, electrical activity, and certain proteins – with (b) appropriate stimulation.

“Every wound is different, so there is no one solution,” said Dr. Isseroff, the team’s clinical lead. “The idea is that it will be able to sense different parameters unique to the wound, use AI to figure out what stage it is in, and provide the right stimulus to kick it out of that stalled stage.”

The team has developed a proof-of-concept prototype: a bandage embedded with a tiny camera that takes pictures and transmits them to a computer algorithm to assess the wound’s progress. Miniaturized battery-powered actuators, or motors, automatically deliver medication.
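To make the sense-classify-act loop concrete, here is a minimal illustrative sketch in Python. It is not the team’s actual software: the sensor fields, thresholds, stage labels, and actions below are all invented placeholders standing in for the camera, AI classifier, and drug-delivering actuators described above.

```python
# Purely illustrative sketch of closed-loop wound-care logic.
# All sensor names, thresholds, and actions are invented placeholders,
# not the research team's actual design.

from dataclasses import dataclass

@dataclass
class WoundReading:
    temperature_c: float  # local skin temperature
    ph: float             # wound-bed pH
    moisture_pct: float   # relative moisture under the dressing

def classify_stage(reading: WoundReading) -> str:
    """Crude stand-in for the AI model that stages the wound."""
    if reading.temperature_c > 38.0 or reading.ph > 8.0:
        return "stalled-inflammatory"  # possible infection or stalled healing
    return "progressing"

def control_step(reading: WoundReading) -> list[str]:
    """One loop iteration: read sensors, classify, choose actions."""
    actions = []
    if classify_stage(reading) == "stalled-inflammatory":
        actions.append("alert clinician")
        actions.append("release antimicrobial")  # actuator-driven drug delivery
    if reading.moisture_pct < 40.0:
        actions.append("flag dressing too dry")
    return actions

if __name__ == "__main__":
    print(control_step(WoundReading(temperature_c=38.4, ph=8.3, moisture_pct=35.0)))
    # -> ['alert clinician', 'release antimicrobial', 'flag dressing too dry']
```

In a real device, the classifier would be a trained model and the actions would drive onboard actuators; the point here is only the shape of the loop.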

Phase I trials in rodents went well, Dr. Rolandi said. The team is now testing the bandage on pigs.

Across the globe, other promising developments are underway.

In a scientific paper published in May, researchers at the University of Glasgow described a new “low-cost, environmentally friendly” bandage embedded with light-emitting diodes that use ultraviolet light to kill bacteria – no antibiotics needed. The fabric is stitched with a slim, flexible coil that powers the lights without a battery using wireless power transfer. In lab studies, it eradicated gram-negative bacteria (some of the nastiest bugs) in 6 hours.

Also in May, in the journal Bioactive Materials, a Penn State team detailed a bandage with medicine-injecting microneedles that can halt bleeding immediately after injury. In lab and animal tests, it reduced clotting time from 11.5 minutes to 1.3 minutes and bleeding by 90%.

“With hemorrhaging injuries, it is often the loss of blood – not the injury itself – that causes death,” said study author Amir Sheikhi, PhD, assistant professor of chemical and biomedical engineering at Penn State. “Those 10 minutes could be the difference between life and death.” 

Another smart bandage, developed at Northwestern University, Chicago, harmlessly dissolves – electrodes and all – into the body after it is no longer needed, eliminating what can be a painful removal.

Guillermo Ameer, DSc, a study author reporting on the technology in Science Advances, hopes it could be made cheaply and used in developing countries.

“We’d like to create something that you could use in your home, even in a very remote village,” said Dr. Ameer, professor of biomedical engineering at Northwestern.
 

Timeline for clinical use

These are early days for the smart bandage, scientists say. Most studies have been in rodents, and more work is needed to develop human-scale bandages, reduce costs, solve long-term data storage, and ensure the material adheres well without irritating the skin.

But Dr. Gurtner is hopeful that some iteration could be used in clinical practice within a few years.

In May, he and colleagues at Stanford (Calif.) University published a paper in Nature Biotechnology describing their smart bandage. It includes a microcontroller unit, a radio antenna, biosensors, and an electrical stimulator all affixed to a rubbery, skin-like polymer (or hydrogel) about the thickness of a single coat of latex paint.

The bandage senses changes in temperature and electrical conductivity as the wound heals, and it gives electrical stimulation to accelerate that healing.
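As a rough, hypothetical illustration of how the conductivity signal might gate stimulation, consider the sketch below. It is not the Stanford group’s published algorithm: the assumption that wound impedance falls steadily as tissue closes, along with the window size and threshold, are simplifications invented for this example.

```python
# Toy sketch of trend-based stimulation gating. Not the Stanford bandage's
# actual algorithm; the window and threshold are invented placeholders.

def healing_stalled(impedance_history: list[float], window: int = 5,
                    min_drop: float = 0.02) -> bool:
    """True if impedance has not fallen by at least `min_drop` (as a
    fraction of its starting value) over the last `window` readings."""
    if len(impedance_history) < window:
        return False  # not enough data to judge a trend yet
    start, end = impedance_history[-window], impedance_history[-1]
    return (start - end) / start < min_drop

readings = [100.0, 99.5, 99.4, 99.4, 99.3, 99.3]  # nearly flat trend
if healing_stalled(readings):
    print("enable electrical stimulation")  # actuator call would go here
else:
    print("healing on track; stimulation off")
```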

Animals treated with the bandage healed 25% faster, with 50% less scarring.

Electrical currents are already used for wound healing in clinical practice, Dr. Gurtner said. Because the stimulus is already approved and the cost to make the bandage could be low (as little as $10 to $50), he believes it could be ushered through the approval processes relatively quickly.

“Is this the ultimate embodiment of all the bells and whistles that are possible in a smart bandage? No. Not yet,” he said. “But we think it will help people. And right now, that’s good enough.”

A version of this article appeared on WebMD.com.


IQ and concussion recovery

Article Type
Changed
Thu, 09/07/2023 - 12:07

Pediatric concussion is one of those rare phenomena whose emergence and clarification we may be witnessing within a single generation. When I was serving as the game doctor for our local high school football team in the 1970s, I and many other physicians had a very simplistic view of concussion. If the patient never lost consciousness and had a reasonably intact short-term memory, we didn’t seriously entertain concussion as a diagnosis. “What’s the score, and who is the president?” were my favorite screening questions.

Obviously, we were underdiagnosing and mismanaging concussion. In part thanks to some high-profile athletes who suffered multiple concussions and eventually chronic traumatic encephalopathy (CTE), physicians began to realize that they should be looking more closely at children who sustained a head injury. The diagnostic criteria were expanded to include any injury that even temporarily affected brain function.

Dr. William G. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years.
Dr. William G. Wilkoff

With the new appreciation for the risk of multiple concussions, the focus broadened to include the question of when it is safe for the athlete to return to competition. What signs or symptoms can the patient offer us so we can be sure his or her brain is sufficiently recovered? Here we stepped off into a deep abyss of ignorance. Fortunately, it became obvious fairly quickly that imaging studies weren’t going to help us, as they were invariably normal or at least didn’t tell us anything that wasn’t obvious on a physical exam.

If the patient had a headache, complained of dizziness, or manifested amnesia, monitoring the patient was fairly straightforward. But, in the absence of symptoms and with no obvious way to determine the pace of recovery of an organ we couldn’t visualize, clinicians were pulling criteria and timetables out of thin air. Guessing that the concussed brain was in some ways like a torn muscle or overstretched tendon, “brain rest” was often suggested: no TV, no reading, and certainly none of the cerebrally challenging activity of school. Fortunately, we don’t hear much about the notion of brain rest anymore, and there is at least one study suggesting that patients kept home from school recover more slowly.

But there remains a significant number of patients who have persistent symptoms and are unable to resume their usual activities, including school and sports. Sometimes they describe headache or dizziness, but often they complain of a vague mental unwellness. “Brain fog,” a term that has emerged in the wake of the COVID pandemic, might be an apt descriptor. Management of these slow recoverers has been a challenge.

However, two recent articles in the journal Pediatrics may provide some clarity and offer guidance in their management. In a study coming from the psychology department at Georgia State University, researchers reported that they were able to find “no evidence of clinically meaningful differences in IQ after pediatric concussion.” In their words, there is “strong evidence against reduced intelligence in the first few weeks to month after pediatric concussion.”

While their findings may simply toss the IQ onto the pile of worthless measures of healing, a companion commentary by Talin Babikian, PhD, a psychologist at the Semel Institute for Neuroscience and Human Behavior at UCLA, provides a more nuanced interpretation. He writes that if we are looking for an explanation when a patient’s recovery is taking longer than we might expect, we need to look beyond structural damage. Maybe the patient has a previously undiagnosed premorbid condition affecting his or her intellectual, cognitive, or learning abilities. Could the stall in improvement be the result of other symptoms? Here fatigue and sleep deprivation may be the culprits. Could some underlying emotional factor such as anxiety or depression be the problem? For example, I have seen patients whose fear of re-injury has prevented their return to full function. And, finally, the patient may be avoiding a “nonpreferred or challenging situation” unrelated to the injury.

In other words, the concussion may simply be the most obvious rip in a fabric that was already frayed and under stress. This kind of broad holistic (a word I usually like to avoid) thinking may be what is lacking as we struggle to understand other mysterious and chronic conditions such as Lyme disease and chronic fatigue syndrome.

While these two papers help provide some clarity in the management of pediatric concussion, what they fail to address is the bigger question of the relationship between head injury and CTE. The answers to that conundrum are enshrouded in a mix of politics and publicity that I doubt will clear in the near future.

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].


Almonds and almond oil

Article Type
Changed
Thu, 09/07/2023 - 09:19

Almonds and almond oil are known to exhibit anti-inflammatory, antihepatotoxicity, and immunity-boosting activity.1 Almond oil (Oleum amygdalae) is pressed from the seed of the deciduous almond tree, which is native to Iran and parts of the Levant; the seeds contain copious amounts of phenols and polyphenols, fatty acids, and vitamin E, all of which are known to exert antioxidant activity.2-5 Almonds have also been found to have a substantial impact on serum lipids.4 Emollient and sclerosant characteristics have been attributed to almond oil, which has been found to improve complexion and skin tone.5 Significantly, in vitro and in vivo studies have shown that UVB-induced photoaging can be attenuated through the use of almond oil and almond skin extract.2 Further, in traditional Chinese medicine, Ayurveda, and ancient Greco-Persian medicine, almond oil was used to treat cutaneous conditions, including eczema and psoriasis.1 The focus of this column is to provide an update on the use of almonds and almond oil for skin care since the topic was last covered in July 2014.

Dr. Leslie S. Baumann

Antiphotoaging activity

In 2019, Foolad and Vaughn conducted a prospective, investigator-blind, randomized controlled trial to determine the effects of almond consumption on facial sebum production and wrinkles. Participants (28 postmenopausal women with Fitzpatrick skin types I and II completed the study) consumed 20% of their daily energy intake in almonds or a calorie-matched snack over 16 weeks through the UC Davis Dermatology Clinic. Photographic analysis revealed that the almond group experienced significantly diminished wrinkle severity, compared with the control group. The investigators concluded that daily almond consumption has the potential to decrease wrinkle severity in postmenopausal women and that almonds may confer natural antiaging effects.4

In a similar investigation 2 years later, Rybak et al. reported on a prospective, randomized controlled study to ascertain the effects of almond consumption on photoaging in postmenopausal women with Fitzpatrick skin types I or II who obtained 20% of their daily energy consumption via almonds or a calorie-matched snack for 24 weeks. Results demonstrated significant effects conferred by almond consumption, with average wrinkle severity substantially diminished in the almond group at weeks 16 (by 15%) and 24 (by 16%), compared with baseline. In addition, facial pigment intensity was reduced by 20% in the almond group by week 16 and this was maintained through the end of the study. Further, sebum excretion was higher in the control group. The investigators concluded that the daily consumption of almonds may have the potential to enhance protection against photoaging, particularly in terms of facial wrinkles and pigment intensity, in postmenopausal women.3

Later in 2021, Li et al. conducted a study in 39 healthy Asian women (18-45 years old) with Fitzpatrick skin types II to IV to investigate the effects of almond consumption on UVB resistance. The researchers randomized participants to eat either 1.5 oz of almonds or 1.8 oz of pretzels daily for 12 weeks. Results showed that the minimal erythema dose was higher in the almond group as compared with the control group. No differences were observed in hydration, melanin, roughness, or sebum on facial skin. The authors concluded that daily oral almond intake may improve photoprotection by raising the minimal erythema dose.2

In a 2022 review on the cutaneous benefits of sweet almond, evening primrose, and jojoba oils, Blaak and Staib noted that all three have been used for hundreds if not thousands of years in traditional medicine to treat various conditions, including skin disorders. Further, they concluded that the longstanding uses of these oils have been borne out by contemporary data, which reveal cutaneous benefits for adult and young skin, particularly in bolstering stratum corneum integrity, recovery, and lipid ratio.6

Later that year, Sanju et al., reporting on the development and assessment of a broad-spectrum polyherbal sunscreen delivered through solid lipid nanoparticles, noted that almond oil was among the natural ingredients used because of its photoprotective characteristics. Overall, the sunscreen formulation, Safranal, was found to impart robust protection against UV radiation.7

Wound healing

In 2020, Borzou et al. conducted a single-blind randomized clinical trial to ascertain the impact of topical almond oil in preventing pressure injuries. Data collection occurred over 8 months in a hospital setting, with 108 patients randomly assigned to receive almond oil, placebo (liquid paraffin), or the control (standard of care). The researchers found that topically applied almond oil was linked to a lower incidence of pressure injuries, and the injuries that did occur arose later in the study than those in the paraffin and standard-of-care groups. Pressure injury incidence was 5.6% in the almond oil group, 13.9% in the placebo group, and 25.1% in the control group.8

That same year, Caglar et al. completed a randomized controlled trial in 90 preterm infants to assess the effects of sunflower seed oil and almond oil on the stratum corneum. Infants were randomly assigned to one of the oil groups or to a control group. A nurse researcher applied the oils to the whole body except for the head and face four times daily for 5 days. Investigators determined that stratum corneum hydration was better in the oil groups than in the control group, with no difference found between sunflower seed and almond oils.9

Eczema, hand dermatitis, and striae

In 2018, Simon et al. performed a randomized, double-blind study to determine the short- and long-term effects of two emollients on pruritus and skin restoration in xerotic eczema. The emollients contained lactic acid and refined almond oil, with one also including polidocanol. Both emollients were effective in reducing the severity of itching, with skin moisture and lipid content found to have risen after the initial administration and yielding steady improvement over 2 weeks.10

Earlier that year, Zeichner et al. found that an OTC moisturizer containing sweet almond oil – rich in fatty acids and used for centuries to treat eczema and psoriasis – was effective in treating hand dermatitis. Specifically, the moisturizer, which contained 7% sweet almond oil and 2% colloidal oatmeal, was identified as safe and effective in resolving moderate to severe hand dermatitis.11

Some studies have also shown almond oil to be effective against striae gravidarum. In 2018, Hajhashemi et al. conducted a double-blind clinical trial in 160 nulliparous women to compare the effects of aloe vera gel and sweet almond oil on striae gravidarum. Volunteers were randomly assigned to one of three treatment groups (aloe vera gel, sweet almond oil, or base cream), which received topical treatment on the abdomen, or to a fourth group, which received no treatment. Results showed that both treatment creams were effective in decreasing the erythema and pruritus associated with striae as well as in preventing their expansion.12 Previously, Tashan and Kafkasli showed in a nonrandomized study that massage with bitter almond oil may diminish the visibility of existing striae gravidarum and prevent the emergence of new striae.13

Conclusion

Almonds and almond oil have been used as food and in traditional medical practices dating back several centuries. In the last decade, intriguing results have emerged regarding the effects of almond consumption or topical almond oil administration on skin health. While much more research is necessary, the recent data seem to support the traditional uses of this tree seed for dermatologic purposes.

Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur in Miami. She founded the division of cosmetic dermatology at the University of Miami in 1997. The third edition of her bestselling textbook, “Cosmetic Dermatology” (New York: McGraw Hill), was published in 2022. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Johnson & Johnson, and Burt’s Bees. She is the CEO of Skin Type Solutions, a SaaS company used to generate skin care routines in office and as an e-commerce solution. Write to her at [email protected].

References

1. Ahmad Z. Complement Ther Clin Pract. 2010 Feb;16(1):10-2.

2. Li JN et al. J Cosmet Dermatol. 2021 Sep;20(9):2975-80.

3. Rybak I et al. Nutrients. 2021 Feb 27;13(3):785.

4. Foolad N et al. Phytother Res. 2019 Dec;33(12):3212-7.

5. Lin TK et al. Int J Mol Sci. 2017 Dec 27;19(1):70.

6. Blaak J, Staib P. Int J Cosmet Sci. 2022 Feb;44(1):1-9.

7. Sanju N et al. J Cosmet Dermatol. 2022 Oct;21(10):4433-46.

8. Borzou SR et al. J Wound Ostomy Continence Nurs. 2020 Jul/Aug;47(4):336-42.

9. Caglar S et al. Adv Skin Wound Care. 2020 Aug;33(8):1-6.

10. Simon D et al. Dermatol Ther. 2018 Nov;31(6):e12692.

11. Zeichner JA et al. J Drugs Dermatol. 2018 Jan 1;17(1):78-82.

12. Hajhashemi M et al. J Matern Fetal Neonatal Med. 2018 Jul;31(13):1703-8.

13. Timur Tashan S, Kafkasli A. J Clin Nurs. 2012 Jun;21(11-12):1570-6.
 

Publications
Topics
Sections

Almonds and almond oil are known to exhibit anti-inflammatory, antihepatotoxicity, and immunity-boosting activity.1 The seed from the deciduous almond tree (Oleum amygdalae), which is native to Iran and parts of the Levant, almonds contain copious amounts of phenols and polyphenols, fatty acids, and vitamin E, all of which are known to exert antioxidant activity.2-5 These seeds have been found to have a substantial impact on serum lipids.4 Emollient and sclerosant characteristics have also been linked to almond oil, which has been found to ameliorate complexion and skin tone.5 Significantly, in vitro and in vivo studies have shown that UVB-induced photoaging can be attenuated through the use of almond oil and almond skin extract.2 Further, in traditional Chinese Medicine, Ayurveda, and ancient Greco-Persian medicine, almond oil was used to treat cutaneous conditions, including eczema and psoriasis.1The focus of this column is to provide an update on the use of almonds and almond oil for skincare since covering the topic in July 2014.

Dr. Leslie S. Baumann

Antiphotoaging activity

In 2019, Foolad and Vaughn conducted a prospective, investigator-blind, randomized controlled trial to determine the effects of almond consumption on facial sebum production and wrinkles. Participants (28 postmenopausal women with Fitzpatrick skin types I and II completed the study) consumed 20% of their daily energy intake in almonds or a calorie-matched snack over 16 weeks through the UC Davis Dermatology Clinic. Photographic analysis revealed that the almond group experienced significantly diminished wrinkle severity, compared with the control group. The investigators concluded that daily almond consumption has the potential to decrease wrinkle severity in postmenopausal women and that almonds may confer natural antiaging effects.4

In a similar investigation 2 years later, Rybak et al. reported on a prospective, randomized controlled study to ascertain the effects of almond consumption on photoaging in postmenopausal women with Fitzpatrick skin types I or II who obtained 20% of their daily energy consumption via almonds or a calorie-matched snack for 24 weeks. Results demonstrated significant effects conferred by almond consumption, with average wrinkle severity substantially diminished in the almond group at weeks 16 (by 15%) and 24 (by 16%), compared with baseline. In addition, facial pigment intensity was reduced by 20% in the almond group by week 16 and this was maintained through the end of the study. Further, sebum excretion was higher in the control group. The investigators concluded that the daily consumption of almonds may have the potential to enhance protection against photoaging, particularly in terms of facial wrinkles and pigment intensity, in postmenopausal women.3

Later in 2021, Li et al. conducted a study in 39 healthy Asian women (18-45 years old) with Fitzpatrick skin types II to IV to investigate the effects of almond consumption on UVB resistance. The researchers randomized participants to eat either 1.5 oz of almonds or 1.8 oz of pretzels daily for 12 weeks. Results showed that the minimal erythema dose was higher in the almond group as compared with the control group. No differences were observed in hydration, melanin, roughness, or sebum on facial skin. The authors concluded that daily oral almond intake may improve photoprotection by raising the minimal erythema dose.2

In a 2022 review on the cutaneous benefits of sweet almond, evening primrose, and jojoba oils, Blaak and Staib noted that all three have been used for hundreds if not thousands of years in traditional medicine to treat various conditions, including skin disorders. Further, they concluded that the longstanding uses of these oils has been borne out by contemporary data, which reveal cutaneous benefits for adult and young skin, particularly in bolstering stratum corneum integrity, recovery, and lipid ratio.6

Later that year, Sanju et al., reporting on the development and assessment of a broad-spectrum polyherbal sunscreen delivered through solid lipid nanoparticles, noted that almond oil was among the natural ingredients used because of its photoprotective characteristics. Overall, the sunscreen formulation, Safranal, was found to impart robust protection against UV radiation.7

Wound healing

In 2020, Borzou et al. conducted a single-blind randomized clinical trial to ascertain the impact of topical almond oil in preventing pressure injuries. Data were collected over 8 months in a hospital setting, with 108 patients randomly assigned to receive almond oil, placebo (liquid paraffin), or standard care (control). Topically applied almond oil was linked to a lower incidence of pressure injuries, and the injuries that did occur arose later in the study than those in the paraffin and standard-care groups. Pressure injury incidence was 5.6% in the almond oil group, 13.9% in the placebo group, and 25.1% in the control group.8

That same year, Caglar et al. completed a randomized controlled trial in 90 preterm infants to assess the effects of sunflower seed oil and almond oil on the stratum corneum. Infants were randomly assigned to one of the two oils or to a control group. A nurse researcher applied the oils to the whole body except the head and face four times daily for 5 days. The investigators determined that stratum corneum hydration was better in the oil groups than in the control group, with no difference found between sunflower seed and almond oils.9

Eczema, hand dermatitis, and striae

In 2018, Simon et al. performed a randomized, double-blind study to determine the short- and long-term effects of two emollients on pruritus and skin restoration in xerotic eczema. The emollients contained lactic acid and refined almond oil, with one also including polidocanol. Both emollients were effective in reducing the severity of itching; skin moisture and lipid content rose after the first application and improved steadily over 2 weeks.10

Earlier that year, Zeichner et al. found that an OTC moisturizer containing sweet almond oil, which is rich in fatty acids and has been used to treat eczema and psoriasis for centuries, was effective in treating hand dermatitis. Specifically, the moisturizer, which contained 7% sweet almond oil and 2% colloidal oatmeal, was found to be safe and effective in resolving moderate to severe hand dermatitis.11

Some studies have also shown almond oil to be effective against striae gravidarum. In 2018, Hajhashemi et al. conducted a double-blind clinical trial in 160 nulliparous women to compare the effects of aloe vera gel and sweet almond oil on striae gravidarum. Volunteers were randomly assigned to one of three treatment groups (aloe vera gel, sweet almond oil, or base cream), which received topical treatment on the abdomen, or to a fourth group, which received no treatment. Results showed that both treatment creams were effective in decreasing the erythema and pruritus associated with striae as well as in preventing their expansion.12 Previously, Tashan and Kafkasli showed in a nonrandomized study that massage with bitter almond oil may diminish the visibility of existing striae gravidarum and prevent the emergence of new striae.13

Conclusion

Almonds and almond oil have been used as food and in traditional medical practices dating back several centuries. In the last decade, intriguing results have emerged regarding the effects of almond consumption or topical almond oil administration on skin health. While much more research is necessary, the recent data seem to support the traditional uses of this tree seed for dermatologic purposes.

Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur in Miami. She founded the division of cosmetic dermatology at the University of Miami in 1997. The third edition of her bestselling textbook, “Cosmetic Dermatology” (New York: McGraw Hill), was published in 2022. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Johnson & Johnson, and Burt’s Bees. She is the CEO of Skin Type Solutions, a SaaS company used to generate skin care routines in office and as an e-commerce solution. Write to her at [email protected].

References

1. Ahmad Z. Complement Ther Clin Pract. 2010 Feb;16(1):10-2.

2. Li JN et al. J Cosmet Dermatol. 2021 Sep;20(9):2975-80.

3. Rybak I et al. Nutrients. 2021 Feb 27;13(3):785.

4. Foolad N et al. Phytother Res. 2019 Dec;33(12):3212-7.

5. Lin TK et al. Int J Mol Sci. 2017 Dec 27;19(1):70.

6. Blaak J, Staib P. Int J Cosmet Sci. 2022 Feb;44(1):1-9.

7. Sanju N et al. J Cosmet Dermatol. 2022 Oct;21(10):4433-46.

8. Borzou SR et al. J Wound Ostomy Continence Nurs. 2020 Jul/Aug;47(4):336-42.

9. Caglar S et al. Adv Skin Wound Care. 2020 Aug;33(8):1-6.

10. Simon D et al. Dermatol Ther. 2018 Nov;31(6):e12692.

11. Zeichner JA et al. J Drugs Dermatol. 2018 Jan 1;17(1):78-82.

12. Hajhashemi M et al. J Matern Fetal Neonatal Med. 2018 Jul;31(13):1703-8.

13. Timur Tashan S, Kafkasli A. J Clin Nurs. 2012 Jun;21(11-12):1570-6.
 


Skin has different daytime and nighttime needs, emerging circadian research suggests

Emerging research on so-called “clock genes” suggests that the human skin has different daytime and nighttime needs, according to Ava Shamban, MD.

“Paying attention to the circadian rhythm of the skin is every bit as important as moisturizing the skin,” Dr. Shamban, a dermatologist who practices in Santa Monica, Calif., said at the annual Masters of Aesthetics Symposium. “It is paramount to both your morning and evening skin regimen routine,” she added.

Circadian rhythms are physical, mental, and behavioral changes that follow a 24-hour cycle. “These natural processes respond primarily to light and dark and affect most living things, including animals, plants, and microbes,” she said. “The circadian system is composed of peripheral circadian oscillators in many other cells, including the skin.”

The science has been around awhile, but dermatologists didn’t understand its impact until recently, she said.

In 1729, the French astronomer Jean-Jacques d’Ortous de Mairan demonstrated that mimosa leaves, which open at dawn and close at dusk, continued this cycle even when kept in darkness. In the 1970s, Seymour Benzer and Ronald Konopka showed that mutations in an unknown gene disrupted the circadian clock of fruit flies.

And in 2017, the Nobel Prize in Physiology or Medicine was awarded to Jeffrey C. Hall, Michael Rosbash, and Michael W. Young for discovering molecular mechanisms that control circadian rhythm. Using fruit flies as a model, they isolated a gene that controls the normal daily biological rhythm.

“They showed that this gene encodes a protein that accumulates in the cell during the night and is then degraded during the day, and they identified additional protein components, exposing the mechanism governing the self-sustaining clockwork inside the cell,” said Dr. Shamban.

In humans and other mammals, the primary body clock is located in the suprachiasmatic nucleus, a cluster of approximately 10,000 neurons located on either side of the midline above the optic chiasma, about 3 cm behind the eyes. Several clock genes have been identified that regulate and control transcription and translation.

“Expression of these core clock genes inside the cell influences many signaling pathways, which allows the cells to identify the time of day and perform their appropriate function,” Dr. Shamban said. “Furthermore, phosphorylation of core clock proteins leads to degradation to keep the 24-hour cycle in sync.”

Photoreceptive molecules known as opsins also appear to play a role in regulating the skin’s clock. A systematic review of 22 articles published in 2020 found that opsins are present in keratinocytes, melanocytes, dermal fibroblasts, and hair follicle cells, and they have been shown to mediate wound healing, melanogenesis, hair growth, and skin photoaging in human and nonhuman species.

“You may wonder, why does the skin respond so nicely to light?” Dr. Shamban said. “Because it contains opsins, and light exposure through opsin-regulated pathways stimulates melanin production.”

Patients can support their skin’s clock genes by understanding that skin barrier functions such as photoprotection and sebum production are increased during the day, while skin permeability and restorative processes such as DNA repair, cell proliferation, and blood flow are enhanced at night.

“Your skin has different daytime and nighttime needs,” Dr. Shamban commented. “Simply put, daytime is defense, and nighttime is offense. I think we’ve known this intuitively, but to know that there is science supporting this idea is important.”

Dr. Shamban wrote the book “Heal Your Skin: The Breakthrough Plan for Renewal” (Wiley, 2011). She disclosed that she conducts clinical trials for many pharmaceutical and device companies.
