Parkinson’s disease: What’s trauma got to do with it?


This transcript has been edited for clarity.

Kathrin LaFaver, MD: Hello. I’m happy to talk today to Dr. Indu Subramanian, clinical professor at University of California, Los Angeles, and director of the Parkinson’s Disease Research, Education and Clinical Center in Los Angeles. I am a neurologist in Saratoga Springs, New York, and we will be talking today about Indu’s new paper on childhood trauma and Parkinson’s disease. Welcome and thanks for taking the time.

Indu Subramanian, MD: Thank you so much for letting us highlight this important topic.

Dr. LaFaver: There are many papers published every month on Parkinson’s disease, but this topic stands out because it hasn’t been commonly studied. What gave you the idea to study this?

Neurology behind other specialties

Dr. Subramanian: Kathrin, you and I have been looking at things that can inform us about our patients – the person who’s standing in front of us when they come in and we’re giving them this diagnosis. I think that so much of what we’ve done [in the past] is a cookie-cutter approach of giving everybody the standard treatment. [We’ve been assuming that] it doesn’t matter if they’re a man or a woman. It doesn’t matter if they’re a veteran. It doesn’t matter if they’re from a minoritized population.

Customization is so key, and we’re realizing that we have often missed the boat, through the pandemic and in health care in general.

We’ve also been interested in approaches that are outside the box, right? We have this integrative medicine and lifestyle medicine background. I’ve been going to those meetings and really been struck by the mounting evidence on the importance of things like early adverse childhood events (ACEs), what zip code you live in, what your pollution index is, and how these things can affect people through their life and their health.

I think it is high time neurologists paid attention to this. There’s been mounting evidence across many disease states, various types of cancers, and mental health. Cardiology is much more advanced, but we haven’t had much data in neurology. In fact, when we went to write this paper, there were just one or two papers looking at multiple sclerosis or general neurologic issues, but really nothing in Parkinson’s disease.

We know that Parkinson’s disease is not only a motor disease; it also affects mental health and other nonmotor domains. Childhood adversity may affect how quickly people progress or how early they develop a disease, and we were interested in how it may manifest in a disease like Parkinson’s disease.

That was the framework coming out of those meetings. As we wrote this paper and went through various editing stages, a beautiful paper came out by Nadine Burke Harris and team that really was a call to action for neurologists to care about trauma.

Dr. LaFaver: I couldn’t agree more. It’s really an underrecognized issue. With my own background, being very interested in functional movement disorders, psychosomatic disorders, and so on, it becomes much more evident how common a trauma background is, and not only among the patients we were traditionally asking about it.

Why don’t you summarize your findings for us?

Adverse childhood events

Dr. Subramanian: This is a web-based survey, so obviously, these are patient self-reports of their disease. We have a large cohort of people that we’ve been following for over 7 years. I’m looking at modifiable variables and what really impacts Parkinson’s disease. Some of our previous papers have looked at diet, exercise, and loneliness. This is the same cohort.

We ended up including the ACEs questionnaire, which is 10 questions asking whether you were exposed to certain things in your household before the age of 18. This is a relatively standard questionnaire that’s administered one time, and you get a score out of 10. This is something that has been pushed, at least in the state of California, as something we should be checking more often in all patients coming in.

We introduced the survey, and we didn’t force everyone to take it. Unfortunately, about 20% of our patients chose not to answer these questions. One has to ask, who are the people who didn’t answer? Are they the ones who may have had trauma, for whom these questions were triggering? It was a gap. We didn’t add extra questions to explore why people didn’t answer.

We also have to put this in context. We have a patient population that’s largely quite affluent, able to access web-based surveys through their computers, and largely Caucasian; there are not many minoritized populations in our cohort. We want to do better with that. We actually were able to gather a decent number of women; women are represented quite well in our survey. I think that’s because of this online approach and some of the things that we’re studying.

In our survey, we broke it down into people who had no ACEs, one to three ACEs, or four or more ACEs. This is a standard way to break down ACEs so that we’re able to categorize what to do with these patient populations.
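
To make that grouping concrete, here is a minimal Python sketch of the standard no-ACEs/1-3/4-or-more bucketing of a 0-10 ACE total. It illustrates the categorization described above; it is not code from the study:

```python
def ace_category(score: int) -> str:
    """Bucket a 0-10 ACE questionnaire total into the standard strata."""
    if not 0 <= score <= 10:
        raise ValueError("ACE score must be between 0 and 10")
    if score == 0:
        return "no ACEs"
    return "1-3 ACEs" if score <= 3 else "4+ ACEs"

assert ace_category(0) == "no ACEs"
assert ace_category(2) == "1-3 ACEs"
assert ace_category(7) == "4+ ACEs"
```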

What we saw – and it’s preliminary evidence – is that people who had higher ACE scores seemed to have greater symptom severity when we controlled for things like years since diagnosis, age, and gender. They also seemed to have a worse quality of life. There was some indication of more nonmotor issues in those populations, as you might expect, such as anxiety and depression, which ACEs can presumably affect independently.
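
An adjusted comparison of this kind amounts to regressing symptom severity on ACE category plus covariates. Below is a sketch on synthetic data; the variable names, model form, and effect sizes are our assumptions for illustration, not the study’s actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data (all names and values hypothetical).
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "ace_score": rng.integers(0, 11, n),   # 0-10 questionnaire total
    "age": rng.normal(66, 9, n),           # years
    "years_dx": rng.exponential(5, n),     # years since diagnosis
    "gender": rng.choice(["F", "M"], n),
})
df["ace_group"] = pd.cut(df["ace_score"], [-1, 0, 3, 10],
                         labels=["none", "1-3", "4+"])
# Fake outcome: severity worsens with high ACE burden and disease duration.
df["severity"] = (20 + 2.5 * (df["ace_score"] >= 4)
                  + 0.8 * df["years_dx"] + rng.normal(0, 3, n))

# Severity by ACE category, controlling for the covariates mentioned in the
# interview: years since diagnosis, age, and gender.
fit = smf.ols("severity ~ C(ace_group) + years_dx + age + gender", df).fit()
print(fit.params)
```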

There are some confounders, but I think we really want to use this as the first piece of evidence to hopefully pave the way for caring about trauma in Parkinson’s disease moving forward.

Dr. LaFaver: Thank you so much for that summary. You already mentioned the main methodology you used.

What is the next step for you? How do you see these findings informing our clinical care? Do you have suggestions for all of the neurologists listening in this regard?


 

PD not yet considered ACE-related

Dr. Subramanian: Dr. Burke Harris is the former surgeon general of California. She’s a woman of color and a brilliant speaker, and she had worked in inner cities, I think in San Francisco, with pediatric populations, seeing these effects of adversity in that time frame.

You see this population at risk, and then you follow the cohort; we knew from the original Kaiser cohort that ACEs predict earlier morbidity and mortality across a number of disease states. We’re seeing things like more heart attacks, more diabetes, and all kinds of things in these populations. This is not news; we just have not been focusing on it.

In her paper, this call to action, the authors talked about ACE-related conditions, which currently do not include Parkinson’s disease. There are three ACE-related neurologic conditions that people should be aware of. One is in the headache/pain universe. Another is in the stroke universe, which is understandable, given cardiovascular risk factors. The third is in the dementia risk category. Parkinson’s disease, as we know, can be associated with dementia; a large percentage of our patients get dementia, but Parkinson’s disease is not called out in this framework.

What people are talking about is this: If you have no ACEs, or you’re in the middle category of one to three ACEs and don’t have an ACE-related diagnosis – which Parkinson’s disease currently is not – you get some basic counseling about the importance of lifestyle. I think we would love to see that anyway. They’re talking about things like exercise, diet, sleep, social connection, and getting out in nature, so just general counseling on the importance of those.

Then, patients in the higher-risk category – four or more ACEs, or one to three ACEs plus an ACE-related neurologic condition such as dementia, headache, or stroke – get additional resources. Some of them may be referred for social work help or mental health support and things like that.

I’d really love to see that happening in Parkinson’s disease, because we have so many needs in our population. I’m always advocating for mental health resources, which are scarce, and for resources in the social support realm, because I believe that social connection and social support are a huge buffer against this trauma.

ACEs are just one type of trauma. I take care of veterans in the Veterans [Affairs Department]. We have some information now coming out about posttraumatic stress disorder, and possibly head injury, predisposing to certain things in Parkinson’s disease. I think we have populations at risk that we can hopefully screen at intake, and I’m really pushing for that.

Maybe it’s not the neurologist who does this intake. It might be someone else on the team who can spend some time doing these questionnaires and finding out whether your patient has a high ACE score. Unless you ask, many patients don’t necessarily come forward to talk about this. I’m really pushing to screen, and to advocate for more research in this area, so that we can classify Parkinson’s disease as an ACE-related condition and thus bring more resources from the mental health world, and also the social support world, to our patients.

Dr. LaFaver: Thank you. Those are many important points, and it’s very important to recognize that trauma may occur not only in childhood but throughout life, as you said, and may really influence nonmotor symptoms of Parkinson’s disease in particular, including anxiety and pain, which are often difficult to treat.

I think there’s much more to do in research, advocacy, and education – educating patients about this, and also educating other neurologists and providers. As you mentioned, trauma-informed care is getting its spotlight in primary care and other specialties. We have catching up to do in neurology, and this is really important work toward that goal.

Thank you so much for your work and for taking the time to share your thoughts. I hope to talk to you again soon.

Dr. Subramanian: Thank you so much, Kathrin.
 

Dr. LaFaver has disclosed no relevant financial relationships. Dr. Subramanian disclosed ties with Acorda Therapeutics.

A version of this article originally appeared on Medscape.com.


Pretransfer visits with pediatric and adult rheumatologists smooth adolescent transition


Implementing a pediatric transition program in which a patient meets with both their pediatric and their soon-to-be adult rheumatologist during a visit before formal transition shortened the time to the first adult visit, according to research presented at the Pediatric Rheumatology Symposium.

The presentation was one of two that focused on ways to improve the transition from pediatric to adult care for rheumatology patients. The other, a poster from researchers at Baylor College of Medicine, Houston, took the first steps toward learning what factors can help predict a successful transition.


“This period of transitioning from pediatric to adult care, both rheumatology specific and otherwise, is a high-risk time,” John M. Bridges, MD, a fourth-year pediatric rheumatology fellow at the University of Alabama at Birmingham, told attendees. “There are changes in insurance coverage, employment, geographic mobility, and shifting responsibilities between parents and children in the setting of a still-developing frontal lobe that contribute to the risk of this period. Risks include disease flare, and then organ damage, as well as issues with decreasing medication and therapy adherence, unscheduled care utilization, and increasing loss to follow-up.”

Dr. Bridges developed a structured transition program called the Bridge to Adult Care from Childhood for Young Adults with Rheumatic Disease (BACC YARD) aimed at improving the pediatric transition period. The analysis he presented focused specifically on reducing loss to follow-up by introducing a pretransfer visit with both rheumatologists. The patient first meets with their pediatric rheumatologist.

During that visit, the adult rheumatologist attends and discusses the patient’s history and current therapy with the pediatric rheumatologist before entering the patient’s room and having “a brief introductory conversation, a sort of verbal handoff and handshake, in front of the patient,” Dr. Bridges explained. “Then I assume responsibility for this patient and their next visit is to see me, both proverbially and literally down the street at the adulthood rheumatology clinic, where this patient becomes a part of my continuity cohort.”



Dr. Bridges entered patients from this BACC YARD cohort into an observational registry that included their dual-provider pretransfer visit and a posttransfer visit, occurring between July 2020 and May 2022. He compared these patients with a historical control cohort of 45 patients from March 2018 to March 2020 who had at least two pediatric rheumatology visits prior to their transfer to adult care and no documentation of outside rheumatology visits during the study period. Specifically, he examined the requested and actual intervals between patients’ final pediatric rheumatology visit and their first adult rheumatology visit.

The intervention cohort included 86 patients, mostly female (73%), with a median age of 20. About two-thirds were White (65%) and one-third (34%) were Black. One patient was Asian, and 7% were Hispanic. Just over half the patients had juvenile idiopathic arthritis (58%), and 30% had lupus and related connective tissue diseases. The other patients had vasculitis, uveitis, inflammatory myopathy, relapsing polychondritis, morphea, or syndrome of undifferentiated recurrent fever.

A total of 8% of these patients had previously been lost to follow-up at Children’s of Alabama before they re-established rheumatology care at UAB, and 3.5% came from a pediatric rheumatologist somewhere other than Children’s of Alabama but established adult care at UAB through the BACC YARD program. Among the remaining patients, 65% (n = 56) had both a dual-provider pretransfer visit and a posttransfer visit.

The BACC YARD patients requested their next rheumatology visit (the first adult one) a median 119 days after their last pediatric visit, and the actual time until that visit was a median 141 days (P < .05). By comparison, the 45 patients in the historical control group had a median 261 days between their last pediatric visit and their first adult visit (P < .001). The median days between visits was shorter for those with JIA (129 days) and lupus (119 days) than for patients with other conditions (149 days).

Dr. Bridges acknowledged that the study was limited by the small size of the cohort and by potential contextual factors related to individual patients’ circumstances.

“We’re continuing to make iterative changes to this process to try to continue to improve the transition and its outcomes in this cohort,” Dr. Bridges said.

Aimee Hersh, MD, an associate professor of pediatric rheumatology and division chief of pediatric rheumatology at the University of Utah and Primary Children’s Hospital, both in Salt Lake City, attended the presentation and noted that the University of Utah has a very similar transfer program.

“I think one of the challenges of that model, and our model, is that you have to have a very specific type of physician who is both [medical-pediatrics] trained and has a specific interest in transition,” Dr. Hersh said in an interview. She noted that the adult rheumatologist at her institution didn’t train in pediatric rheumatology but did complete a meds-peds residency. “So if you can find an adult rheumatologist who can do something similar, can see older adolescent patients and serve as that transition bridge, then I think it is feasible.”

For practices that don’t have the resources for this kind of program, Dr. Hersh recommended the Got Transition program, which provides transition guidance that can be applied to any adolescent population with chronic illness.

The other study, led by Kristiana Nasto, BS, a third-year medical student at Baylor College of Medicine, reported findings from one aspect of a program also developed to improve the transition from pediatric to adult care for rheumatology patients. It included periodic self-reported evaluation using the validated Adolescent Assessment of Preparation for Transition (ADAPT) survey. As a first step toward better understanding the factors that can predict successful transition, the researchers surveyed returning patients with any rheumatologic diagnosis, aged 14 years and older, between July 2021 and November 2022.

Since the survey was automated through the electronic medical record, patients and their caregivers could respond during in-person or virtual visit check-in. The researchers calculated three composite scores out of 100 for self-management, prescription management, and transfer planning, using responses from the ADAPT survey. Among 462 patients who returned 670 surveys, 87% provided surveys that could be scored for at least one composite score. Most respondents were female (75%), White (69%), non-Hispanic (64%), English speaking (90%), and aged 14-17 years (83%).
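
The article does not spell out how ADAPT items are scored, so the sketch below is deliberately generic: it assumes Likert-style items rescaled to 0-100 and averaged within a domain, skipping unanswered items. Every detail here is a hypothetical stand-in for the actual ADAPT scoring rules:

```python
from statistics import mean

def composite_0_100(responses, scale_max=5):
    """Average one domain's answered items and rescale to 0-100.
    Hypothetical scheme; the actual ADAPT scoring may differ."""
    answered = [r for r in responses if r is not None]
    if not answered:
        return None  # domain cannot be scored for this survey
    return round(mean((r - 1) / (scale_max - 1) for r in answered) * 100)

# A mostly "not prepared" transfer-planning domain with one skipped item:
print(composite_0_100([1, 2, None, 1]))  # -> 8 on the 0-100 scale
```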

The overall average score for self-management from 401 respondents was 35. For prescription management, the average score was 59 from 288 respondents, and the average transfer planning score was 17 from 367 respondents. Self-management and transfer planning scores both improved with age (P = .0001). Self-management scores rose from an average of 20 at age 14 to an average of 64 at age 18 and older. Transfer planning scores increased from an average of 1 at age 14 to an average of 49 at age 18 and older. Prescription management scores remained high across all ages, from an average of 59 at age 14 to an average of 66 at age 18 and older (P = .044). Although the scores did not statistically vary by gender or race, Hispanic patients did score higher in self-management, with an average of 44.5, compared with 31 among other patients (P = .0001).

Only 21% of patients completed two surveys, and 8.4% completed all three surveys. The average time between the first and second surveys was 4 months, during which there was no statistically significant change in self-management or prescription management scores, but transfer planning scores did increase from 14 to 21 (P = .008) among the 90 patients who completed those surveys.

The researchers concluded from their analysis that “participation in the transition pathway can rapidly improve transfer planning scores, [but] opportunities remain to improve readiness in all domains.” The researchers are in the process of developing Spanish-language surveys.

No external funding was noted for either study. Dr. Bridges, Dr. Hersh, and Ms. Nasto reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

Implementing a pediatric transition program in which a patient meets with both their pediatric and soon-to-be adult rheumatologist during a visit before formal transition resulted in less time setting up the first adult visit, according to research presented at the Pediatric Rheumatology Symposium.

The presentation was one of two that focused on ways to improve the transition from pediatric to adult care for rheumatology patients. The other, a poster from researchers at Baylor College of Medicine, Houston, took the first steps toward learning what factors can help predict a successful transition.

Tara Haelle
Dr. John M. Bridges

“This period of transitioning from pediatric to adult care, both rheumatology specific and otherwise, is a high-risk time,” John M. Bridges, MD, a fourth-year pediatric rheumatology fellow at the University of Alabama at Birmingham, told attendees. “There are changes in insurance coverage, employment, geographic mobility, and shifting responsibilities between parents and children in the setting of a still-developing frontal lobe that contribute to the risk of this period. Risks include disease flare, and then organ damage, as well as issues with decreasing medication and therapy, adherence, unscheduled care utilization, and increasing loss to follow-up.”

Dr. Bridges developed a structured transition program called the Bridge to Adult Care from Childhood for Young Adults with Rheumatic Disease (BACC YARD) aimed at improving the pediatric transition period. The analysis he presented focused specifically on reducing loss to follow-up by introducing a pretransfer visit with both rheumatologists. The patient first meets with their pediatric rheumatologist.

During that visit, the adult rheumatologist attends and discusses the patient’s history and current therapy with the pediatric rheumatologist before entering the patient’s room and having “a brief introductory conversation, a sort of verbal handoff and handshake, in front of the patient,” Dr. Bridges explained. “Then I assume responsibility for this patient and their next visit is to see me, both proverbially and literally down the street at the adulthood rheumatology clinic, where this patient becomes a part of my continuity cohort.”



Bridges entered patients from this BACC YARD cohort into an observational registry that included their dual provider pretransfer visit and a posttransfer visit, occurring between July 2020 and May 2022. He compared these patients with a historical control cohort of 45 patients from March 2018 to March 2020, who had at least two pediatric rheumatology visits prior to their transfer to adult care and no documentation of outside rheumatology visits during the study period. Specifically, he examined at the requested and actual interval between patients’ final pediatric rheumatology visit and their first adult rheumatology visit.

The intervention cohort included 86 patients, mostly female (73%), with a median age of 20. About two-thirds were White (65%) and one-third (34%) were Black. One patient was Asian, and 7% were Hispanic. Just over half the patients had juvenile idiopathic arthritis (58%), and 30% had lupus and related connective tissue diseases. The other patients had vasculitis, uveitis, inflammatory myopathy, relapsing polychondritis, morphea, or syndrome of undifferentiated recurrent fever.

A total of 8% of these patients had previously been lost to follow-up at Children’s of Alabama before they re-established rheumatology care at UAB, and 3.5% came from a pediatric rheumatologist from somewhere other than Children’s of Alabama but established adult care at UAB through the BACC YARD program. Among the remaining patients, 65% (n = 56) had both a dual provider pretransfer visit and a posttransfer visit.

The BACC YARD patients requested their next rheumatology visit (the first adult one) a median 119 days after their last pediatric visit, and the actual time until that visit was a median 141 days (P < .05). By comparison, the 45 patients in the historical control group had a median 261 days between their last pediatric visit and their first adult visit (P < .001). The median days between visits was shorter for those with JIA (129 days) and lupus (119 days) than for patients with other conditions (149 days).

Bridges acknowledged that the study was limited by the small size of the cohort and potential contextual factors related to individual patients’ circumstances.

“We’re continuing to make iterative changes to this process to try to continue to improve the transition and its outcomes in this cohort,” Dr. Bridges said.

Aimee Hersh, MD, an associate professor of pediatric rheumatology and division chief of pediatric rheumatology at the University of Utah and Primary Children’s Hospital, both in Salt Lake City, attended the presentation and noted that the University of Utah has a very similar transfer program.

“I think one of the challenges of that model, and our model, is that you have to have a very specific type of physician who is both [medical-pediatrics] trained and has a specific interest in transition,” Dr. Hersh said in an interview. She noted that the adult rheumatologist at her institution didn’t train in pediatric rheumatology but did complete a meds-peds residency. “So if you can find an adult rheumatologist who can do something similar, can see older adolescent patients and serve as that transition bridge, then I think it is feasible.”

For practices that don’t have the resources for this kind of program, Dr. Hersh recommended the Got Transition program, which provides transition guidance that can be applied to any adolescent population with chronic illness.

The other study, led by Kristiana Nasto, BS, a third-year medical student at Baylor College of Medicine, reported on the findings from one aspect of a program also developed to improve the transition from pediatric to adult care for rheumatology patients. It included periodic self-reported evaluation using the validated Adolescent Assessment of Preparation for Transition (ADAPT) survey. As the first step to better understanding the factors that can predict successful transition, the researchers surveyed returning patients with any rheumatologic diagnosis, aged 14 years and older, between July 2021 and November 2022.

Since the survey was automated through the electronic medical record, patients and their caregivers could respond during in-person or virtual visit check-in. The researchers calculated three composite scores out of 100 for self-management, prescription management, and transfer planning, using responses from the ADAPT survey. Among 462 patients who returned 670 surveys, 87% provided surveys that could be scored for at least one composite score. Most respondents were female (75%), White (69%), non-Hispanic (64%), English speaking (90%), and aged 14-17 years (83%).

The overall average score for self-management from 401 respondents was 35. For prescription management, the average score was 59 from 288 respondents, and the average transfer planning score was 17 from 367 respondents. Self-management and transfer planning scores both improved with age (P = .0001). Self-management scores rose from an average of 20 at age 14 to an average of 64 at age 18 and older. Transfer planning scores increased from an average of 1 at age 14 to an average of 49 at age 18 and older. Prescription management scores remained high across all ages, from an average of 59 at age 14 to an average score of 66 at age 18 and older (P = .044). Although the scores did not statistically vary by age or race, Hispanic patients did score higher in self-management with an average of 44.5, compared with 31 among other patients (P = .0001).

Only 21% of patients completed two surveys, and 8.4% completed all three surveys. The average time between the first and second surveys was 4 months, during which there was no statistically significant change in self-management or prescription management scores, but transfer planning scores did increase from 14 to 21 (P = .008) among the 90 patients who completed those surveys.

The researchers concluded from their analysis that “participation in the transition pathway can rapidly improve transfer planning scores, [but] opportunities remain to improve readiness in all domains.” The researchers are in the process of developing Spanish-language surveys.

No external funding was noted for either study. Dr. Bridges, Dr. Hersh, and Ms. Nasto reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

 

Implementing a pediatric transition program in which a patient meets with both their pediatric and soon-to-be adult rheumatologist during a visit before formal transition resulted in less time setting up the first adult visit, according to research presented at the Pediatric Rheumatology Symposium.

The presentation was one of two that focused on ways to improve the transition from pediatric to adult care for rheumatology patients. The other, a poster from researchers at Baylor College of Medicine, Houston, took the first steps toward learning what factors can help predict a successful transition.

Tara Haelle
Dr. John M. Bridges

“This period of transitioning from pediatric to adult care, both rheumatology specific and otherwise, is a high-risk time,” John M. Bridges, MD, a fourth-year pediatric rheumatology fellow at the University of Alabama at Birmingham, told attendees. “There are changes in insurance coverage, employment, geographic mobility, and shifting responsibilities between parents and children in the setting of a still-developing frontal lobe that contribute to the risk of this period. Risks include disease flare, and then organ damage, as well as issues with decreasing medication and therapy, adherence, unscheduled care utilization, and increasing loss to follow-up.”

Dr. Bridges developed a structured transition program called the Bridge to Adult Care from Childhood for Young Adults with Rheumatic Disease (BACC YARD) aimed at improving the pediatric transition period. The analysis he presented focused specifically on reducing loss to follow-up by introducing a pretransfer visit with both rheumatologists. The patient first meets with their pediatric rheumatologist.

During that visit, the adult rheumatologist attends and discusses the patient’s history and current therapy with the pediatric rheumatologist before entering the patient’s room and having “a brief introductory conversation, a sort of verbal handoff and handshake, in front of the patient,” Dr. Bridges explained. “Then I assume responsibility for this patient and their next visit is to see me, both proverbially and literally down the street at the adulthood rheumatology clinic, where this patient becomes a part of my continuity cohort.”



Bridges entered patients from this BACC YARD cohort into an observational registry that included their dual provider pretransfer visit and a posttransfer visit, occurring between July 2020 and May 2022. He compared these patients with a historical control cohort of 45 patients from March 2018 to March 2020, who had at least two pediatric rheumatology visits prior to their transfer to adult care and no documentation of outside rheumatology visits during the study period. Specifically, he examined at the requested and actual interval between patients’ final pediatric rheumatology visit and their first adult rheumatology visit.

The intervention cohort included 86 patients, mostly female (73%), with a median age of 20. About two-thirds were White (65%) and one-third (34%) were Black. One patient was Asian, and 7% were Hispanic. Just over half the patients had juvenile idiopathic arthritis (58%), and 30% had lupus and related connective tissue diseases. The other patients had vasculitis, uveitis, inflammatory myopathy, relapsing polychondritis, morphea, or syndrome of undifferentiated recurrent fever.

A total of 8% of these patients had previously been lost to follow-up at Children’s of Alabama before they re-established rheumatology care at UAB, and 3.5% came from a pediatric rheumatologist from somewhere other than Children’s of Alabama but established adult care at UAB through the BACC YARD program. Among the remaining patients, 65% (n = 56) had both a dual provider pretransfer visit and a posttransfer visit.

The BACC YARD patients requested their next rheumatology visit (the first adult one) a median 119 days after their last pediatric visit, and the actual time until that visit was a median 141 days (P < .05). By comparison, the 45 patients in the historical control group had a median 261 days between their last pediatric visit and their first adult visit (P < .001). The median days between visits was shorter for those with JIA (129 days) and lupus (119 days) than for patients with other conditions (149 days).

Bridges acknowledged that the study was limited by the small size of the cohort and potential contextual factors related to individual patients’ circumstances.

“We’re continuing to make iterative changes to this process to try to continue to improve the transition and its outcomes in this cohort,” Dr. Bridges said.

Aimee Hersh, MD, an associate professor of pediatric rheumatology and division chief of pediatric rheumatology at the University of Utah and Primary Children’s Hospital, both in Salt Lake City, attended the presentation and noted that the University of Utah has a very similar transfer program.

“I think one of the challenges of that model, and our model, is that you have to have a very specific type of physician who is both [medical-pediatrics] trained and has a specific interest in transition,” Dr. Hersh said in an interview. She noted that the adult rheumatologist at her institution didn’t train in pediatric rheumatology but did complete a meds-peds residency. “So if you can find an adult rheumatologist who can do something similar, can see older adolescent patients and serve as that transition bridge, then I think it is feasible.”

For practices that don’t have the resources for this kind of program, Dr. Hersh recommended the Got Transition program, which provides transition guidance that can be applied to any adolescent population with chronic illness.

The other study, led by Kristiana Nasto, BS, a third-year medical student at Baylor College of Medicine, reported on the findings from one aspect of a program also developed to improve the transition from pediatric to adult care for rheumatology patients. It included periodic self-reported evaluation using the validated Adolescent Assessment of Preparation for Transition (ADAPT) survey. As the first step to better understanding the factors that can predict successful transition, the researchers surveyed returning patients with any rheumatologic diagnosis, aged 14 years and older, between July 2021 and November 2022.

Since the survey was automated through the electronic medical record, patients and their caregivers could respond during in-person or virtual visit check-in. The researchers calculated three composite scores out of 100 for self-management, prescription management, and transfer planning, using responses from the ADAPT survey. Among 462 patients who returned 670 surveys, 87% provided surveys that could be scored for at least one composite score. Most respondents were female (75%), White (69%), non-Hispanic (64%), English speaking (90%), and aged 14-17 years (83%).

The overall average score for self-management from 401 respondents was 35. For prescription management, the average score was 59 from 288 respondents, and the average transfer planning score was 17 from 367 respondents. Self-management and transfer planning scores both improved with age (P = .0001). Self-management scores rose from an average of 20 at age 14 to an average of 64 at age 18 and older. Transfer planning scores increased from an average of 1 at age 14 to an average of 49 at age 18 and older. Prescription management scores remained high across all ages, from an average of 59 at age 14 to an average score of 66 at age 18 and older (P = .044). Although the scores did not statistically vary by age or race, Hispanic patients did score higher in self-management with an average of 44.5, compared with 31 among other patients (P = .0001).

Only 21% of patients completed two surveys, and 8.4% completed all three surveys. The average time between the first and second surveys was 4 months, during which there was no statistically significant change in self-management or prescription management scores, but transfer planning scores did increase from 14 to 21 (P = .008) among the 90 patients who completed those surveys.
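The report does not name the statistical test behind the P = .008 change, so the sketch below shows just one plausible way to test a paired before/after difference in composite scores, here a Wilcoxon signed-rank test on synthetic data.

```python
# Illustrative paired comparison on synthetic data; the authors' actual
# statistical method is not stated in the report.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
first = rng.uniform(0, 40, size=90)          # synthetic first-survey scores
second = first + rng.normal(7, 10, size=90)  # synthetic follow-up scores

stat, p = wilcoxon(first, second)
print(f"median change: {np.median(second - first):.1f}, p = {p:.4g}")
```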

The researchers concluded from their analysis that “participation in the transition pathway can rapidly improve transfer planning scores, [but] opportunities remain to improve readiness in all domains.” The researchers are in the process of developing Spanish-language surveys.

No external funding was noted for either study. Dr. Bridges, Dr. Hersh, and Ms. Nasto reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


MRD: Powerful metric for CLL research

The latest therapies for chronic lymphocytic leukemia (CLL) offer prolonged remission, along with a need for better tools to gauge their effectiveness. Data from a new study published in Frontiers in Oncology demonstrate that assessing measurable residual disease (MRD) helps doctors evaluate and implement novel treatments.

“MRD measurement is now a key feature of CLL clinical trials reporting. It can change CLL care by enabling approval of medication use in the wider (nontrial) patient population based on MRD data, without having to wait (ever-increasing) times for conventional trial outcomes, such as progression-free survival [PFS],” said study author Tahla Munir, MD, of the department of hematology at the Leeds (England) Teaching Hospitals NHS Trust.

“It also has potential to direct our treatment duration and follow-up strategies based on MRD results taken during or at the end of treatment, and to direct new treatment strategies, such as intermittent (as opposed to fixed-duration or continuous) treatment,” Dr. Munir said in an interview.

The review defined MRD according to the detectable proportion of residual CLL cells. (The current international consensus threshold for undetectable disease, U-MRD4, is fewer than 1 leukemic cell in 10,000 leukocytes.) The advantages and disadvantages of different MRD assays were analyzed. Multiparameter flow cytometry, an older technology, proved less sensitive than newer tests, but it reliably measures to a sensitivity of U-MRD4 and is more widely available than next-generation real-time quantitative polymerase chain reaction tests (NG-PCR).
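As a worked example of those thresholds: U-MRD4 corresponds to a CLL fraction below 1 in 10,000 (10⁻⁴), while NG-PCR is quoted as reaching roughly 1 in 10⁶. The constants below come from the article’s figures; the classifier itself is only an illustration, not a clinical tool.

```python
# Worked example of the MRD cutoffs quoted above. The two constants reflect
# the article's figures; the function is illustrative, not a clinical tool.

U_MRD4_THRESHOLD = 1e-4    # consensus: <1 CLL cell per 10,000 leukocytes
NG_PCR_SENSITIVITY = 1e-6  # approximate detection floor quoted for NG-PCR

def classify_mrd(cll_cells: int, leukocytes: int) -> str:
    fraction = cll_cells / leukocytes
    if fraction < NG_PCR_SENSITIVITY:
        return "below NG-PCR detection limit"
    if fraction < U_MRD4_THRESHOLD:
        return "undetectable at U-MRD4 (but within NG-PCR range)"
    return "detectable MRD"

print(classify_mrd(3, 1_000_000))  # 3e-6: missed at U-MRD4 sensitivity, seen by NG-PCR
print(classify_mrd(5, 10_000))     # 5e-4: detectable MRD
```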

“NG-PCR has the most potential for use in laboratory practice. It doesn’t require patient-specific primers and can detect around 1 CLL cell in 1×10⁶ leukocytes. The biggest challenge is laboratory sequencing and bioinformatic capacity,” said lead study author Amelia Fisher, clinical research fellow at the division of cancer studies and pathology, University of Leeds.

“Multiple wells are required to gather adequate data to match the sensitivity of NGS. As this technology improves to match NGS sensitivity using fewer wells, once primers (bespoke to each patient) are designed it will provide a simple to use, rapid and easily reportable MRD tool, that could be scaled up in the event of MRD testing becoming routine practice,” explained Dr. Fisher.

The study also demonstrated how MRD can offer more in-depth insights into treatment success than PFS alone. In the MURANO clinical trial, which compared venetoclax-rituximab (VR) treatment with standard chemoimmunotherapy (SC) for relapsed or refractory CLL, PFS and overall survival (OS) remained significantly prolonged in the VR group at 5 years after therapy.

Analysis of MRD levels in the VR arm demonstrated that those with U-MRD4 had superior OS, with survival at 5 years of 95.3%, compared with those with higher rates of MRD (72.9%). A slower rate of MRD doubling time in the VR-treated patients, compared with the SC-treated patients, also buttressed the notion of moving from SC to VR treatment for the general CLL patient population.
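For readers unfamiliar with the term, MRD doubling time is conventionally derived from an exponential-regrowth assumption; the algebra below is the standard form, not necessarily the exact model fitted in the MURANO analysis.

```latex
% Standard exponential-regrowth algebra for MRD doubling time (illustrative;
% not necessarily the MURANO analysis model). Two MRD measurements M_1, M_2
% at times t_1 < t_2 determine the doubling time T_d:
\[
M(t) = M_1 \, 2^{(t - t_1)/T_d}
\qquad\Longrightarrow\qquad
T_d = \frac{(t_2 - t_1)\,\ln 2}{\ln\!\left(M_2/M_1\right)}.
\]
% Example: MRD rising from 1\times10^{-5} to 4\times10^{-5} over 12 months
% gives T_d = 12\ln 2 / \ln 4 = 6 months; a longer T_d (slower regrowth),
% as reported with VR versus SC, is the favorable pattern.
```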

Researchers cautioned that “a lot of the data is very recent, and therefore we do not have conventional trial outcomes, e.g., PFS and OS for all the studies. Some of the data we have is over a relatively short time period.”

An independent expert not associated with the study, Alessandra Ferrajoli, MD, associate medical director of the department of leukemia at the University of Texas MD Anderson Cancer Center, Houston, expressed agreement with the study’s main findings.

“It is very likely that MRD assessment will be incorporated as a standard measurement of treatment efficacy in patients with CLL in the near future. The technologies have evolved to high levels of sensitivity, and the methods are being successfully harmonized and standardized,” she said.

Neither the study authors nor Dr. Ferrajoli reported conflicts of interest.

Cervical screening often stops at 65, but should it?

“Did you love your wife?” asks a character in “Rose,” a book by Martin Cruz Smith.

“No, but she became a fact through perseverance,” the man replied.

Medicine also has such relationships, it seems – tentative ideas that turned into fact simply by existing long enough.

Age 65 as the cutoff for cervical screening may be one such example. It has persisted for 27 years with limited science to back it up. That may soon change with the launch of a $3.3 million study funded by the National Institutes of Health (NIH) and intended to provide a more solid evidence base on the benefits and harms of cervical screening for women older than 65.

It’s an important issue: 20% of all cervical cancer cases are found in women who are older than 65. Most of these patients have late-stage disease, which can be fatal. In the United States, 35% of cervical cancer deaths occur after age 65. But women in this age group are usually no longer screened for cervical cancer.

Back in 1996, the U.S. Preventive Services Task Force recommended that for women at average risk with adequate prior screening, cervical screening should stop at the age of 65. This recommendation has been carried forward year after year and has been incorporated into several other guidelines.

For example, current guidelines from the American Cancer Society, the American College of Obstetricians and Gynecologists, and the USPSTF recommend that cervical screening stop at age 65 for patients with adequate prior screening.

“Adequate screening” is defined as three consecutive normal Pap tests or two consecutive negative human papillomavirus tests or two consecutive negative co-tests within the prior 10 years, with the most recent screening within 5 years and with no precancerous lesions in the past 25 years.
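Expressed as a checklist, the guideline criteria above might be encoded as follows. The field names and date handling are hypothetical, and “consecutive” is assumed to be reflected in the counts; this is only a sketch of the stated logic.

```python
# Sketch of the "adequate prior screening" rules as quoted in this article.
# Field names are hypothetical; counts are assumed to be consecutive results.

from datetime import date

def adequate_prior_screening(
    normal_paps_last_10y: int,
    negative_hpv_last_10y: int,
    negative_cotests_last_10y: int,
    most_recent_screen: date,
    precancer_last_25y: bool,
    today: date = date(2023, 4, 1),
) -> bool:
    enough_tests = (
        normal_paps_last_10y >= 3
        or negative_hpv_last_10y >= 2
        or negative_cotests_last_10y >= 2
    )
    recent_enough = (today - most_recent_screen).days <= 5 * 365
    return enough_tests and recent_enough and not precancer_last_25y

print(adequate_prior_screening(3, 0, 0, date(2020, 6, 1), False))  # True
```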

This all sounds reasonable; however, for most women, medical records aren’t up to the task of providing a clean bill of cervical health over many decades.

Explained Sarah Feldman, MD, an associate professor in obstetrics, gynecology, and reproductive biology at Harvard Medical School, Boston: “You know, when a patient says to me at 65, ‘Should I continue screening?’ I say, ‘Do you have all your results?’ And they’ll say, ‘Well, I remember I had a sort of abnormal pap 15 years ago,’ and I say, ‘All right; well, who knows what that was?’ So I’ll continue screening.”

According to George Sawaya, MD, professor of obstetrics, gynecology, and reproductive sciences at the University of California, San Francisco, up to 60% of women do not meet the criteria to end screening at age 65. This means that each year in the United States, approximately 1.7 million women turn 65 and should, in theory, continue to undergo screening for cervical cancer.

Unfortunately, the evidence base for the harms and benefits of cervical screening after age 65 is almost nonexistent – at least by the current standards of evidence-based medicine.

“We need to be clear that we don’t really know the appropriateness of the screening after 65,” said Dr. Sawaya, “which is ironic, because cervical cancer screening is probably the most commonly implemented cancer screening test in the country because it starts so early and ends so late and it’s applied so frequently.”

Dr. Feldman agrees that the age 65 cutoff is “somewhat arbitrary.” She said, “Why don’t they want to consider it continuing past 65? I don’t really understand, I have to be honest with you.”

So what’s the scientific evidence backing up the 27-year-old recommendation?

In 2018, the USPSTF’s cervical-screening guidelines concluded “with moderate certainty that the benefits of screening in women older than 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer do not outweigh the potential harms.”

This recommendation was based on a new decision model commissioned by the USPSTF. The model was needed because, as noted by the guidelines’ authors, “None of the screening trials enrolled women older than 65 years, so direct evidence on when to stop screening is not available.”

In 2020, the ACS carried out a fresh literature review and published its own recommendations. The ACS concluded that “the evidence for the effectiveness of screening beyond age 65 is limited, based solely on observational and modeling studies.”

As a result, the ACS assigned a “qualified recommendation” to the age-65 moratorium (defined as “less certainty about the balance of benefits and harms or about patients’ values and preferences”).

Most recently, the 2021 Updated Cervical Cancer Screening Guidelines, published by the American College of Obstetricians and Gynecologists, endorsed the recommendations of the USPSTF.

Dr. Sawaya said, “The whole issue about screening over 65 is complicated from a lot of perspectives. We don’t know a lot about the safety. We don’t really know a lot about patients’ perceptions of it. But we do know that there has to be an upper age limit after which screening is just simply imprudent.”

Dr. Sawaya acknowledges that there exists a “heck-why-not” attitude toward cervical screening after 65 among some physicians, given that the tests are quick and cheap and could save a life, but he sounds a note of caution.

“It’s like when we used to use old cameras: the film was cheap, but the developing was really expensive,” Dr. Sawaya said. “So it’s not necessarily about the tests being cheap, it’s about the cascade of events [that follow].”

Follow-up for cervical cancer can be more hazardous for a postmenopausal patient than for a younger woman, explained Dr. Sawaya, because the transformation zone of the cervix may be difficult to see on colposcopy. Instead of a straightforward 5-minute procedure in the doctor’s office, the older patient may need the operating room simply to provide the first biopsy.

In addition, treatments such as cone biopsy, loop excision, or ablation are also more worrying for older women, said Dr. Sawaya, “So you start thinking about the risks of anesthesia, you start thinking about the risks of bleeding and infection, etc. And these have not been well described in older people.”

To add to the uncertainty about the merits and risks of hunting out cervical cancer in older women, a lot has changed in women’s health since 1996.

Explained Dr. Sawaya, “This stake was put in the ground in 1996, ... but since that time, life expectancy has gained 5 years. So a logical person would say, ‘Oh, well, let’s just say it should be 70 now, right?’ [But] can we even use old studies to inform the current cohort of women who are entering this 65-year-and-older age group?”

To answer all these questions, a 5-year, $3.3 million study funded by the NIH through the National Cancer Institute is now underway.

The project, named Comparative Effectiveness Research to Validate and Improve Cervical Cancer Screening (CERVICCS 2), will be led by Dr. Sawaya and Michael Silverberg, PhD, associate director of the Behavioral Health, Aging and Infectious Diseases Section of Kaiser Permanente Northern California’s Division of Research.

It’s not possible to conduct a true randomized controlled trial in this field of medicine for ethical reasons, so CERVICCS 2 will emulate a randomized study by following the fate of approximately 280,000 women older than 65 who were long-term members of two large health systems during 2005-2022. The cohort-study design will allow the researchers to track cervical cancer incidence, stage at diagnosis, and cancer mortality and then compare these outcomes to a person’s screening history – both before and after the crucial age 65 cutoff.

The California study will also look at the downsides of diagnostic procedures and surgical interventions that follow a positive screening result after the age of 65 and the personal experiences of the women involved.

Dr. Sawaya and Dr. Silverberg’s team will use software that emulates a clinical trial by utilizing observational data to compare the benefits and risks of screening continuation or screening cessation after age 65.
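The article gives no detail on that software, so the following is only a schematic of trial emulation in general: pick a “time zero” (turning 65), apply trial-style eligibility, define arms from the observed exposure, and compare outcomes. All names and rules here are hypothetical, and the confounding adjustment a real emulation would require is omitted.

```python
# Schematic of trial emulation from observational data. CERVICCS 2's actual
# methods are not described in the article; every column name, eligibility
# rule, and the crude comparison below is hypothetical.

import pandas as pd

cohort = pd.DataFrame({
    "adequately_screened_before_65": [True, True, True, True, False, True],
    "screened_after_65": [True, False, True, False, False, True],
    "cervical_cancer_by_75": [False, False, False, True, True, False],
})

# Step 1: trial-style eligibility at the "time zero" of turning 65.
eligible = cohort[cohort["adequately_screened_before_65"]]

# Step 2: assign "arms" by observed exposure (continuation vs. cessation).
# Step 3: compare the outcome; a real emulation would adjust for confounding
# (e.g., inverse-probability weighting) rather than this crude contrast.
rates = eligible.groupby("screened_after_65")["cervical_cancer_by_75"].mean()
print(rates)
```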

In effect, after 27 years of loyalty to a recommendation supported by low-quality evidence, medicine will finally have a reliable answer to the question: Should we continue to look for cervical cancer in women over 65?

Dr. Sawaya concluded: “There’s very few things that are packaged away and thought to be just the truth. And this is why we always have to be vigilant. ... And that’s what keeps science so interesting and exciting.”

Dr. Sawaya has disclosed no relevant financial relationships. Dr. Feldman writes for UpToDate and receives several NIH grants.

A version of this article first appeared on Medscape.com.

Magnesium-rich diet linked to lower dementia risk

A magnesium-rich diet has been linked to better brain health, an outcome that may help lower dementia risk, new research suggests.

Investigators studied more than 6,000 cognitively healthy individuals, aged 40-73, and found that those who consumed more than 550 mg of magnesium daily had a brain age approximately 1 year younger by age 55 years, compared with a person who consumed a normal magnesium intake (~360 mg per day).

“This research highlights the potential benefits of a diet high in magnesium and the role it plays in promoting good brain health,” lead author Khawlah Alateeq, a PhD candidate in neuroscience at Australian National University’s National Centre for Epidemiology and Population Health, said in an interview.

Clinicians “can use [the findings] to counsel patients on the benefits of increasing magnesium intake through a healthy diet and monitoring magnesium levels to prevent deficiencies,” she stated.

The study was published online in the European Journal of Nutrition.

Promising target

The researchers were motivated to conduct the study because of “the growing concern over the increasing prevalence of dementia,” Ms. Alateeq said.

“Since there is no cure for dementia, and the development of pharmacological treatment for dementia has been unsuccessful over the last 30 years, prevention has been suggested as an effective approach to address the issue,” she added.

Nutrition, Ms. Alateeq said, is a “modifiable risk factor that can influence brain health and is highly amenable to scalable and cost-effective interventions.” It represents “a promising target” for risk reduction at a population level.

Previous research shows individuals with lower magnesium levels are at higher risk for Alzheimer’s disease, while those with higher dietary magnesium intake may be at lower risk of progressing from normal aging to cognitive impairment.

Most previous studies, however, included participants older than age 60 years, and it’s “unclear when the neuroprotective effects of dietary magnesium become detectable,” the researchers note.

Moreover, dietary patterns change and fluctuate, potentially leading to changes in magnesium intake over time. These changes may have as much impact as absolute magnesium at any point in time.

In light of the “current lack of understanding of when and to what extent dietary magnesium exerts its protective effects on the brain,” the researchers examined the association between magnesium trajectories over time, brain matter, and white matter lesions.

They also examined the association between magnesium and several blood pressure measures (mean arterial pressure, systolic blood pressure [SBP], diastolic blood pressure [DBP], and pulse pressure).

Since cardiovascular health, neurodegeneration, and brain shrinkage patterns differ between men and women, the researchers stratified their analyses by sex.

Brain volume differences

The researchers analyzed the dietary magnesium intake of 6,001 individuals (mean age, 55.3 years) selected from the UK Biobank – a prospective cohort study of participants aged 37-73 at baseline, who were assessed between 2005 and 2023.

For the current study, only participants with baseline DBP and SBP measurements and structural MRI scans were included. Participants were also required to be free of neurologic disorders and to have an available record of dietary magnesium intake.

Covariates included age, sex, education, health conditions, body mass index, smoking status, amount of physical activity, and alcohol intake.

Over a 16-month period, participants completed an online questionnaire five times. Their responses were used to calculate daily magnesium intake. Foods of particular interest included leafy green vegetables, legumes, nuts, seeds, and whole grains, all of which are magnesium rich.

They used latent class analysis (LCA) to “identify mutually exclusive subgroup[s] (classes) of magnesium intake trajectory separately for men and women.”
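Standard Python stacks have no drop-in latent class analysis, so as a rough stand-in, clearly not the authors’ method, one can cluster per-wave intake vectors with a Gaussian mixture to see how trajectory classes emerge from repeated measurements.

```python
# Rough analogue of trajectory classing: the authors used latent class
# analysis; scikit-learn has no LCA, so this sketch clusters synthetic
# per-wave intake vectors with a Gaussian mixture purely as an illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
n = 300
waves = 5  # five questionnaire waves over 16 months

base = rng.normal(350, 60, size=(n, 1))            # baseline intake (mg/day)
slope = rng.choice([-8.0, 0.0, 8.0], size=(n, 1))  # decreasing, stable, increasing
trajectories = base + slope * np.arange(waves) + rng.normal(0, 15, size=(n, waves))

gm = GaussianMixture(n_components=3, random_state=0).fit(trajectories)
labels = gm.predict(trajectories)
for k in range(3):
    print(f"class {k}: mean trajectory {trajectories[labels == k].mean(axis=0).round(0)}")
```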

Men had a slightly higher prevalence of BP medication and diabetes, compared with women, and postmenopausal women had a higher prevalence of BP medication and diabetes, compared with premenopausal women.

Compared with lower baseline magnesium intake, higher baseline dietary intake of magnesium was associated with larger brain volumes in several regions in both men and women.

The latent class analysis identified three classes of magnesium intake trajectory: “high-decreasing,” “normal-stable,” and “low-increasing.”

In women in particular, the “high-decreasing” trajectory was significantly associated with larger brain volumes, compared with the “normal-stable” trajectory, while the “low-increasing” trajectory was associated with smaller brain volumes.

Even an increase of 1 mg of magnesium per day (above 350 mg/day) made a difference in brain volume, especially in women. (The per-milligram changes were reported in a table that could not be reproduced here.)

Associations between magnesium and BP measures were “mostly nonsignificant,” the researchers say, and the neuroprotective effect of higher magnesium intake in the high-decreasing trajectory was greater in postmenopausal versus premenopausal women.

“Our models indicate that compared to somebody with a normal magnesium intake (~350 mg per day), somebody in the top quartile of magnesium intake (≥ 550 mg per day) would be predicted to have a ~0.20% larger GM [gray matter volume] and ~0.46% larger RHC [right hippocampal volume],” the authors summarize.

“In a population with an average age of 55 years, this effect corresponds to ~1 year of typical aging,” they note. “In other words, if this effect is generalizable to other populations, a 41% increase in magnesium intake may lead to significantly better brain health.”
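A back-of-envelope reading of those numbers: if a ~0.20% larger gray matter volume equates to ~1 year of typical aging, the implied midlife shrinkage rate is about 0.2% per year (and ~0.46% per year for the right hippocampus). These rates are inferred from the quoted figures, not taken from the paper.

```python
# Back-of-envelope check of the "~1 year of typical aging" equivalence.
# The implied annual shrinkage rates are inferred from the quoted numbers,
# not stated in the paper itself.

gm_diff_pct = 0.20    # predicted GM difference, >=550 vs ~350 mg/day
rhc_diff_pct = 0.46   # predicted right-hippocampus difference

years_equivalent = 1.0  # the authors' stated brain-age offset

implied_gm_atrophy = gm_diff_pct / years_equivalent    # ~0.20 %/year
implied_rhc_atrophy = rhc_diff_pct / years_equivalent  # ~0.46 %/year

print(f"implied GM shrinkage ~{implied_gm_atrophy:.2f}%/yr, "
      f"right hippocampus ~{implied_rhc_atrophy:.2f}%/yr at midlife")
```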

Although the exact mechanisms underlying magnesium’s protective effects are “not yet clearly understood, there’s considerable evidence that magnesium levels are related to better cardiovascular health. Magnesium supplementation has been found to decrease blood pressure – and high blood pressure is a well-established risk factor for dementia,” said Ms. Alateeq.

Association, not causation

Yuko Hara, PhD, director of Aging and Prevention, Alzheimer’s Drug Discovery Foundation, noted that the study is observational and therefore shows an association, not causation.

“People eating a high-magnesium diet may also be eating a brain-healthy diet and getting high levels of nutrients/minerals other than magnesium alone,” suggested Dr. Hara, who was not involved with the study.

She noted that many foods are good sources of magnesium, including spinach, almonds, cashews, legumes, yogurt, brown rice, and avocados.

“Eating a brain-healthy diet (for example, the Mediterranean diet) is one of the Seven Steps to Protect Your Cognitive Vitality that ADDF’s Cognitive Vitality promotes,” she said.

Open Access funding was enabled and organized by the Council of Australian University Librarians and its Member Institutions. Ms. Alateeq, her co-authors, and Dr. Hara declare no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

A magnesium-rich diet has been linked to better brain health, an outcome that may help lower dementia risk, new research suggests.

Investigators studied more than 6,000 cognitively healthy individuals, aged 40-73, and found that those who consumed more than 550 mg of magnesium daily had a brain age approximately 1 year younger by age 55 years, compared with a person who consumed a normal magnesium intake (~360 mg per day).

“This research highlights the potential benefits of a diet high in magnesium and the role it plays in promoting good brain health,” lead author Khawlah Alateeq, a PhD candidate in neuroscience at Australian National University’s National Centre for Epidemiology and Population Health, said in an interview.

Clinicians “can use [the findings] to counsel patients on the benefits of increasing magnesium intake through a healthy diet and monitoring magnesium levels to prevent deficiencies,” she stated.

The study was published online  in the European Journal of Nutrition.
 

Promising target

The researchers were motivated to conduct the study because of “the growing concern over the increasing prevalence of dementia,” Ms. Alateeq said.

“Since there is no cure for dementia, and the development of pharmacological treatment for dementia has been unsuccessful over the last 30 years, prevention has been suggested as an effective approach to address the issue,” she added.

Nutrition, Ms. Alateeq said, is a “modifiable risk factor that can influence brain health and is highly amenable to scalable and cost-effective interventions.” It represents “a promising target” for risk reduction at a population level.

Previous research shows individuals with lower magnesium levels are at higher risk for AD, while those with higher dietary magnesium intake may be at lower risk of progressing from normal aging to cognitive impairment.

Most previous studies, however, included participants older than age 60 years, and it’s “unclear when the neuroprotective effects of dietary magnesium become detectable,” the researchers note.

Moreover, dietary patterns change and fluctuate, potentially leading to changes in magnesium intake over time. These changes may have as much impact as absolute magnesium at any point in time.

In light of the “current lack of understanding of when and to what extent dietary magnesium exerts its protective effects on the brain,” the researchers examined the association between magnesium trajectories over time, brain matter, and white matter lesions.

They also examined the association between magnesium and several different blood pressure measures (mean arterial pressure, systolic blood pressure, diastolic blood pressure, and pulse pressure).

Since cardiovascular health, neurodegeneration, and brain shrinkage patterns differ between men and women, the researchers stratified their analyses by sex.
 

Brain volume differences

The researchers analyzed the dietary magnesium intake of 6,001 individuals (mean age, 55.3 years) selected from the UK Biobank – a prospective cohort study of participants aged 37-73 at baseline, who were assessed between 2005 and 2023.

For the current study, only participants with baseline DBP and SBP measurements and structural MRI scans were included. Participants were also required to be free of neurologic disorders and to have an available record of dietary magnesium intake.

Covariates included age, sex, education, health conditions, smoking status, body mass index, amount of physical activity, smoking status, and alcohol intake.

Over a 16-month period, participants completed an online questionnaire five times. Their responses were used to calculate daily magnesium intake. Foods of particular interest included leafy green vegetables, legumes, nuts, seeds, and whole grains, all of which are magnesium rich.

They used latent class analysis (LCA) to “identify mutually exclusive subgroup (classes) of magnesium intake trajectory separately for men and women.”

Men had a slightly higher prevalence of BP medication and diabetes, compared with women, and postmenopausal women had a higher prevalence of BP medication and diabetes, compared with premenopausal women.

Compared with lower baseline magnesium intake, higher baseline dietary intake of magnesium was associated with larger brain volumes in several regions in both men and women.

The latent class analysis identified three classes of magnesium intake:




In women in particular, the “high-decreasing” trajectory was significantly associated with larger brain volumes, compared with the “normal-stable” trajectory, while the “low-increasing” trajectory was associated with smaller brain volumes.



Even an increase of 1 mg of magnesium per day (above 350 mg/day) made a difference in brain volume, especially in women. The changes associated with every 1-mg increase are found in the table below:



Associations between magnesium and BP measures were “mostly nonsignificant,” the researchers say, and the neuroprotective effect of higher magnesium intake in the high-decreasing trajectory was greater in postmenopausal versus premenopausal women.

“Our models indicate that compared to somebody with a normal magnesium intake (~350 mg per day), somebody in the top quartile of magnesium intake (≥ 550 mg per day) would be predicted to have a ~0.20% larger GM and ~0.46% larger RHC,” the authors summarize.

“In a population with an average age of 55 years, this effect corresponds to ~1 year of typical aging,” they note. “In other words, if this effect is generalizable to other populations, a 41% increase in magnesium intake may lead to significantly better brain health.”

Although the exact mechanisms underlying magnesium’s protective effects are “not yet clearly understood, there’s considerable evidence that magnesium levels are related to better cardiovascular health. Magnesium supplementation has been found to decrease blood pressure – and high blood pressure is a well-established risk factor for dementia,” said Ms. Alateeq.
 

 

 

Association, not causation

Yuko Hara, PhD, director of Aging and Prevention, Alzheimer’s Drug Discovery Foundation, noted that the study is observational and therefore shows an association, not causation.

“People eating a high-magnesium diet may also be eating a brain-healthy diet and getting high levels of nutrients/minerals other than magnesium alone,” suggested Dr. Hara, who was not involved with the study.

She noted that many foods are good sources of magnesium, including spinach, almonds, cashews, legumes, yogurt, brown rice, and avocados.

“Eating a brain-healthy diet (for example, the Mediterranean diet) is one of the Seven Steps to Protect Your Cognitive Vitality that ADDF’s Cognitive Vitality promotes,” she said.

Open Access funding was enabled and organized by the Council of Australian University Librarians and its Member Institutions. Ms. Alateeq, her co-authors, and Dr. Hara declare no relevant financial relationships.

A version of this article originally appeared on Medscape.com.



Autism: Is it in the water?

Article Type
Changed
Tue, 04/04/2023 - 15:05

 

This transcript has been edited for clarity.

Few diseases have stymied explanation like autism spectrum disorder (ASD). We know that the prevalence has been increasing dramatically, but we aren’t quite sure whether that is because of more screening and awareness or more fundamental changes. We know that much of the risk appears to be genetic, but there may be 1,000 genes involved in the syndrome. We know that certain environmental exposures, like pollution, might increase the risk – perhaps on a susceptible genetic background – but we’re not really sure which exposures are most harmful.

So, the search continues, across all domains of inquiry from cell culture to large epidemiologic analyses. And this week, a new player enters the field, and, as they say, it’s something in the water.

Does exposure to lithium in groundwater cause autism?

We’re talking about this paper, by Zeyan Liew and colleagues, appearing in JAMA Pediatrics.

Using the incredibly robust health data infrastructure in Denmark, the researchers were able to identify 8,842 children born between 2000 and 2013 with ASD and matched each one to five control kids of the same sex and age without autism.

They then mapped the locations where the mothers of these kids lived while they were pregnant – down to 5-meter resolution, actually – to groundwater lithium levels.

Once that was done, the analysis was straightforward. Would moms who were pregnant in areas with higher groundwater lithium levels be more likely to have kids with ASD?

The results show a rather steady and consistent association between higher lithium levels in groundwater and the prevalence of ASD in children.

We’re not talking huge numbers, but moms who lived in the areas of the highest quartile of lithium were about 46% more likely to have a child with ASD. That’s a relative risk, of course – this would be like an increase from 1 in 100 kids to 1.5 in 100 kids. But still, it’s intriguing.
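
To make the relative-versus-absolute distinction concrete, here is a minimal sketch of the arithmetic in Python, using the transcript’s illustrative 1-in-100 baseline (an illustration for scale, not a prevalence estimate reported in the study):

```python
# Convert the reported relative risk into absolute terms.
# The 1-in-100 baseline is the transcript's illustrative figure,
# not a number from Liew and colleagues.
baseline_risk = 1 / 100    # assumed baseline: 1 case per 100 children
relative_risk = 1.46       # ~46% higher risk in the top lithium quartile

exposed_risk = baseline_risk * relative_risk
print(f"baseline {baseline_risk:.1%} -> exposed {exposed_risk:.2%}")
# baseline 1.0% -> exposed 1.46%, i.e., roughly 1.5 in 100 kids
```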

But the case is far from closed here.

Groundwater concentration of lithium and the amount of lithium a pregnant mother ingests are not the same thing. It does turn out that virtually all drinking water in Denmark comes from groundwater sources – but not all lithium comes from drinking water. There are plenty of dietary sources of lithium as well. And, of course, there is medical lithium, but we’ll get to that in a second.

First, let’s talk about those lithium measurements. They were taken in 2013 – after all these kids were born. The authors acknowledge this limitation but show a high correlation between measured levels in 2013 and earlier measured levels from prior studies, suggesting that lithium levels in a given area are quite constant over time. That’s great – but if lithium levels are constant over time, this study does nothing to shed light on why autism diagnoses seem to be increasing.

Let’s put some numbers to the lithium concentrations the authors examined. The average was about 12 mcg/L.

As a reminder, a standard therapeutic dose of lithium used for bipolar disorder is like 600 mg. That means you’d need to drink more than 2,500 of those 5-gallon jugs that sit on your water cooler, per day, to approximate the dose you’d get from a lithium tablet. Of course, small doses can still cause toxicity – but I wanted to put this in perspective.
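
As a rough check on that jug figure, here is a minimal back-of-the-envelope sketch using the transcript’s numbers (12 mcg/L groundwater, a 600-mg tablet) and the standard 5-gallon (~18.9 L) cooler jug:

```python
# How much 12-mcg/L water would match one 600-mg lithium tablet?
dose_mg = 600.0            # therapeutic dose cited in the transcript
conc_mg_per_l = 12e-3      # 12 mcg/L groundwater, expressed in mg/L
jug_l = 5 * 3.785          # one 5-gallon water-cooler jug, ~18.9 L

liters = dose_mg / conc_mg_per_l   # 50,000 L
jugs = liters / jug_l              # ~2,642 jugs
print(f"{liters:,.0f} L of water, or about {jugs:,.0f} jugs per day")
```

That arithmetic is where the “more than 2,500 jugs” figure comes from.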

Also, we have some data on pregnant women who take medical lithium. An analysis of nine studies showed that first-trimester lithium use may be associated with congenital malformations – particularly some specific heart malformations – and some birth complications. But three of four separate studies looking at longer-term neurodevelopmental outcomes did not find any effect on development, attainment of milestones, or IQ. One study of 15 kids exposed to medical lithium in utero did note minor neurologic dysfunction in one child and a low verbal IQ in another – but that’s a very small study.

Of course, lithium levels vary around the world as well. The U.S. Geological Survey examined lithium content in groundwater in the United States, as you can see here.

Our numbers are pretty similar to Denmark’s – in the 0-60 mcg/L range. But an area in the Argentine Andes has levels as high as 1,600 mcg/L. A study of 194 babies from that area found that higher lithium exposure was associated with lower fetal size, but I haven’t seen follow-up on neurodevelopmental outcomes.

The point is that there is a lot of variability here. It would be really interesting to map groundwater lithium levels to autism rates around the world. As a teaser, I will point out that, if you look at worldwide autism rates, you may be able to convince yourself that they are higher in more arid climates, and arid climates tend to have more groundwater lithium. But I’m really reaching here. More work needs to be done.

And I hope it is done quickly. Lithium is in the midst of becoming a very important commodity thanks to the shift to electric vehicles. While we can hope that recycling will claim most of those batteries at the end of their life, some will escape reclamation and potentially put more lithium into the drinking water. I’d like to know how risky that is before it happens.

 

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. He has disclosed no relevant financial relationships. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his new book, “How Medicine Works and When It Doesn’t”, is available now.

A version of this article originally appeared on Medscape.com.


The sacrifice of orthodoxy: Maintaining collegiality in psychiatry

Article Type
Changed
Tue, 04/04/2023 - 15:19

 

Psychiatrists practice in a wide array of ways. We approach our work and our patients with beliefs and preconceptions that develop over time. Our training has significant influence, though our own personalities and biases also affect our understanding.

Psychiatrists have philosophical lenses through which they see patients. We can reflect and see some standard archetypes. We are familiar with the reductionistic pharmacologist, the somatic treatment specialist, the psychodynamic ‘guru,’ and the medicolegally paralyzed practitioner. It is without judgment that we lay these out, for our very point is that we have these constituent parts within our own clinical identities. The intensity with which we subscribe to these clinical sensibilities could contribute to a biased orthodoxy.

Orthodoxy can be defined as an accepted theory that stems from an authoritative entity, a phenomenon that remains plainly visible in our field. For example, one can quickly peruse the psychodynamic literature to find one school of thought criticizing another. It is not without some confrontation and even interpersonal rifts that the lineage of psychoanalytic theory has evolved. This has always been of interest to us. A core facet of psychoanalysis is empathy, truly knowing the inner state of another person. And yet, the very bastions of this clinical sensibility frequently resort to veiled attacks on those in their field who hold opposing views. This raises the question: If even enlightened institutions fail at a nonjudgmental approach toward their colleagues, what hope is there for the rest of us clinicians, mired in the thick of day-to-day clinical practice?

It is our contention that the odds are against us. Even the aforementioned critique of psychoanalytic orthodoxy is just another example of how we humans organize our experience. Even as we write an article in argument against unbridled critique, we find it difficult to do so without engaging in it. For to criticize another is to help shore up our own personal identities. This is especially the case when clinicians deal with issues that we feel strongly about. The human psyche has a need to organize its experience, as “our experience of ourselves is fundamental to how we operate in the world. Our subjective experience is the phenomenology of all that one might be aware of.”1

In this vein, we would like to cite attribution theory. This is a view of human behavior within social psychology. The Austrian psychologist Fritz Heider, PhD, investigated “the domain of social interactions, wondering how people perceive each other in interaction and especially how they make sense of each other’s behavior.”2 Attribution theory suggests that as humans organize our social interactions, we may make two basic assumptions. One is that our own behavior is highly affected by an environment that is beyond our control. The second is that when judging the behavior of others, we are more likely to attribute it to internal traits that they have. A classic example is automobile traffic. When we see someone driving erratically, we are more likely to blame them for being an inherently bad driver. However, if attention is called to our own driving, we are more likely to cite external factors such as rush hour, a bad driver around us, or a faulty vehicle.

We would like to reference one last model of human behavior. It has become customary within the field of neuroscience to view the brain as a predictive organ: “Theories of prediction in perception, action, and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e., by reducing the prediction error.”3 Perception itself has recently been described as a controlled hallucination, in which the brain predicts what it is about to see based on past experience. Visual stimuli take time to enter our eyes and be processed by the brain – “predictions would need to preactivate neural representations that would typically be driven by sensory input, before the actual arrival of that input.”4 It thus seems to be an inherent strategy of the brain to anticipate visual and even social events to help human beings sustain themselves.

Having spoken of a psychoanalytic conceptualization of self-organization, the theory of attribution, and research into social neuroscience, we turn our attention back to the central question that this article would like to address. Can we, as clinicians, truly put ourselves into the mindset of our colleagues and appreciate, and even agree with, the philosophies and methodologies of our fellow psychiatrists?

When we find ourselves busy in rote clinical practice, we believe the likelihood of intercollegiate mentalization is low; our ability to relate to our peers becomes strained. We ultimately do not practice in a vacuum. Psychiatrists, even those in solo private practice, are part of a community of providers who, more or less, follow some emergent ‘standard of care.’ This can be a vague concept, but one that takes on concrete form in the minds of certain clinicians and certainly in a medicolegal court. Yet the psychiatrists we know all have very stereotyped ways of practicing. And at the heart of it, we all think that we are right.

We can use polypharmacy as an example. Imagine that you have a new patient intake, who tells you that they are transferring care from another psychiatrist. They inform you of their medication regimen. This patient presents on eight or more psychotropics. Many of us may have a visceral reaction at this point and, following the aforementioned attribution theory, we may ask ourselves what ‘quack’ of a doctor would do this. Yet some among us would think that a very competent psychopharmacologist was daring enough to use the full armamentarium of psychopharmacology to help this patient, who must be treatment refractory.

When speaking with such a patient, we would be quick to reflect on our own parsimonious use of medications. We would tell ourselves that we are responsible providers and would be quick to recommend discontinuation of medications. This would help us feel better about ourselves, and would of course assuage the ever-present medicolegal ‘big brother’ in our minds. It is through this very process that we affirm our self-identities. For if this patient’s previous physician was a bad psychiatrist, then we are a good psychiatrist. It is through this process that our clinical selves find confirmation.

We do not mean to reduce the complexities of human behavior to quick stereotypes. However, it is our belief that when confronted with clinical or philosophical disputes with our colleagues, the basic rules of human behavior will attempt to dissolve and override efforts at mentalization, collegiality, or interpersonal sensitivity. For to accept a clinical practice view that is different from ours would be akin to giving up the essence of our clinical identities. It could be compared to the fragmentation process of a vulnerable psyche when confronted with a reality that is at odds with preconceived notions and experiences.

While we may be able to appreciate the nuances and sensibilities of another provider, we believe it would be particularly difficult for most of us to actually attempt to practice in a fashion that is not congruent with our own organizers of experience. Whether or not our practice style is ‘perfect,’ it has worked for us. Social neuroscience and our understanding of the organization of the self would predict that we would hold onto our way of practice with all the mind’s defenses. Externalization, denial, and projection could all be called into action in this battle against existential fragmentation.

Do we seek to portray a clinical world where there is no hope for genuine modeling of clinical sensibilities to other psychiatrists? That is not our intention. Yet it seems that many of the theoretical frameworks that we subscribe to argue against this possibility. We would be hypocritical if we did not here state that our own theoretical frameworks are yet other examples of “organizers of experience.” Attribution theory, intersubjectivity, and social neuroscience are simply our ways of organizing the chaos of perceptions, ideas, and intricacies of human behavior.

If we accept that psychiatrists, like all human beings, are trapped in a subjective experience, then we can be more playful and flexible when interacting with our colleagues. We do not have to be as defensive of our practices and accusatory of others. If we practice daily according to some orthodoxy, then we color our experiences of the patient and of our colleagues’ ways of practice. We automatically start off on the wrong foot. And yet, to give up this orthodoxy would, by definition, be disorganizing and fragmenting to us. For as Nietzsche said, “truth is an illusion without which a certain species could not survive.”5

Dr. Khalafian practices full time as a general outpatient psychiatrist. He trained at the University of California, San Diego, for his psychiatric residency and currently works as a telepsychiatrist, serving an outpatient clinic population in northern California. Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Badre and Dr. Khalafian have no conflicts of interest.

References

1. Buirski P and Haglund P. Making sense together: The intersubjective approach to psychotherapy. Northvale, NJ: Jason Aronson; 2001.

2. Malle BF. Attribution theories: How people make sense of behavior. In Chadee D (ed.), Theories in social psychology. pp. 72-95. Wiley-Blackwell; 2011.

3. Brown EC and Brune M. The role of prediction in social neuroscience. Front Hum Neurosci. 2012 May 24;6:147. doi: 10.3389/fnhum.2012.00147.

4. Blom T et al. Predictions drive neural representations of visual events ahead of incoming sensory information. Proc Natl Acad Sci USA. 2020 Mar 31;117(13):7510-7515. doi: 10.1073/pnas.1917777117.

5. Yalom I. The Gift of Therapy. Harper Perennial; 2002.


Melasma

Article Type
Changed
Wed, 04/05/2023 - 11:29

THE COMPARISON

A Melasma on the face of a Hispanic woman, with hyperpigmentation on the cheeks, bridge of the nose, and upper lip.

B Melasma on the face of a Malaysian woman, with hyperpigmentation on the upper cheeks and bridge of the nose.

C Melasma on the face of an African woman, with hyperpigmentation on the upper cheeks and lateral to the eyes.

Photographs courtesy of Richard P. Usatine, MD.

Melasma (also known as chloasma) is a pigmentary disorder that causes chronic, symmetric hyperpigmentation on the face. In patients with darker skin tones, centrofacial areas are affected.1 Increased melanin deposition in the dermis leads to dermal melanosis. Newer research suggests that mast cell and keratinocyte interactions, altered gene regulation, neovascularization, and disruptions in the basement membrane cause melasma.2 Patients present with epidermal or dermal melasma or a combination of both (mixed melasma).3 Wood lamp examination is helpful to distinguish between epidermal and dermal melasma. Dermal and mixed melasma can be difficult to treat and require multimodal treatments.

Epidemiology

Melasma commonly affects women aged 20 to 40 years,4 with a female to male ratio of 9:1.5 Potential triggers of melasma include hormones (eg, pregnancy, oral contraceptives, hormone replacement therapy) and exposure to UV light.2,5 Melasma occurs in patients of all racial and ethnic backgrounds; however, the prevalence is higher in patients with darker skin tones.2

Key clinical features in people with darker skin tones

Melasma commonly manifests as symmetrically distributed, reticulated (lacy), dark brown to grayish brown patches on the cheeks, nose, forehead, upper lip, and chin in patients with darker skin tones.5 The pigment can be tan brown in patients with lighter skin tones. Given that postinflammatory hyperpigmentation and other pigmentary disorders can cause a similar appearance, a biopsy sometimes is needed to confirm the diagnosis, but melasma is diagnosed via physical examination in most patients. Melasma can be misdiagnosed as postinflammatory hyperpigmentation, solar lentigines, exogenous ochronosis, and Hori nevus.5

Worth noting

Prevention

• Daily sunscreen use is critical to prevent worsening of melasma. Sunscreen may not appear cosmetically elegant on darker skin tones, which creates a barrier to its use.6 Protection from both UV and visible light is necessary. Visible light, including light from light bulbs and device-emitted blue light, can worsen melasma. Iron oxides in tinted sunscreen offer protection from visible light.

• Physicians can recommend sunscreens that are more transparent or tinted for a better cosmetic match.

• Severe flares of melasma can occur with sun exposure despite good control with medications and laser modalities.

Treatment

• First-line therapies include topical hydroquinone 2% to 4%, tretinoin, azelaic acid, kojic acid, or ascorbic acid (vitamin C). A popular compounded topical combines a steroid, tretinoin, and hydroquinone.1,5 Over-the-counter hydroquinone has been removed from the market due to safety concerns; however, prescription hydroquinone remains first line in the treatment of melasma. If hydroquinone is prescribed, treatment intervals of 6 to 8 weeks followed by a hydroquinone-free period are advised to reduce the risk for exogenous ochronosis (a paradoxical darkening of the skin).

• Chemical peels are second-line treatments that are effective for melasma. Improvement in epidermal melasma has been shown with chemical peels containing Jessner solution, salicylic acid, or α-hydroxy acid. Patients with dermal and mixed melasma have seen improvement with trichloroacetic acid 25% to 35% with or without Jessner solution.1

• Cysteamine is a topical treatment created from the degradation of coenzyme A. It disrupts the synthesis of melanin to create a more even skin tone. It may be recommended in combination with sunscreen as a first-line or second-line topical therapy.

• Oral tranexamic acid, an analogue of lysine, is a third-line treatment. It decreases prostaglandin production, which reduces the number of tyrosine precursors available for the creation of melanin. Tranexamic acid has been shown to lighten the appearance of melasma.7 Its most serious adverse effect is blood clots, and it should be avoided in patients taking combination (estrogen and progestin) contraceptives and in those with a personal or family history of clotting disorders.8

• Fourth-line treatments such as lasers (performed by dermatologists) can destroy deposited pigment while avoiding destruction of epidermal keratinocytes.1,9,10 They also are commonly employed in refractory melasma. The most common lasers are nonablative fractionated lasers and low-fluence Q-switched lasers. The Q-switched Nd:YAG and picosecond lasers are safe for treating melasma in darker skin tones. Ablative fractionated lasers such as CO2 lasers and erbium:YAG lasers also have been used in the treatment of melasma; however, there is still an extremely high risk for postinflammatory dyspigmentation 1 to 2 months after the procedure.10

• Although there is still a risk for rebound hyperpigmentation after laser treatment, use of topical hydroquinone pretreatment may help decrease postoperative hyperpigmentation.1,5 Patients who are treated with the incorrect laser or overtreated may develop postinflammatory hyperpigmentation, rebound hyperpigmentation, or hypopigmentation.

Health disparity highlight

Melasma is a chronic pigmentation disorder that is most common in patients with skin of color and is cosmetically and psychologically burdensome,11 leading to decreased quality of life, emotional functioning, and self-esteem.12 Clinicians should counsel patients and work closely with them on long-term management. The treatment options for melasma are considered cosmetic and may be cost prohibitive for many patients to cover out-of-pocket. Topical treatments have been found to be the most cost-effective.13 Some compounding pharmacies and drug discount programs provide more affordable treatment pricing; however, some patients are still unable to afford these options.

References
  1. Cunha PR, Kroumpouzos G. Melasma and vitiligo: novel and experimental therapies. J Clin Exp Derm Res. 2016;7:2. doi:10.4172/2155-9554.1000e106
  2. Rajanala S, Maymone MBC, Vashi NA. Melasma pathogenesis: a review of the latest research, pathological findings, and investigational therapies. Dermatol Online J. 2019;25:13030/qt47b7r28c.
  3. Grimes PE, Yamada N, Bhawan J. Light microscopic, immunohistochemical, and ultrastructural alterations in patients with melasma. Am J Dermatopathol. 2005;27:96-101.
  4. Achar A, Rathi SK. Melasma: a clinico-epidemiological study of 312 cases. Indian J Dermatol. 2011;56:380-382.
  5. Ogbechie-Godec OA, Elbuluk N. Melasma: an up-to-date comprehensive review. Dermatol Ther. 2017;7:305-318.
  6. Morquette AJ, Waples ER, Heath CR. The importance of cosmetically elegant sunscreen in skin of color populations. J Cosmet Dermatol. 2022;21:1337-1338.
  7. Taraz M, Nikham S, Ehsani AH. Tranexamic acid in treatment of melasma: a comprehensive review of clinical studies [published online January 30, 2017]. Dermatol Ther. doi:10.1111/dth.12465
  8. Bala HR, Lee S, Wong C, et al. Oral tranexamic acid for the treatment of melasma: a review. Dermatol Surg. 2018;44:814-825.
  9. Castanedo-Cazares JP, Hernandez-Blanco D, Carlos-Ortega B, et al. Near-visible light and UV photoprotection in the treatment of melasma: a double-blind randomized trial. Photodermatol Photoimmunol Photomed. 2014;30:35-42.
  10. Trivedi MK, Yang FC, Cho BK. A review of laser and light therapy in melasma. Int J Womens Dermatol. 2017;3:11-20.
  11. Dodmani PN, Deshmukh AR. Assessment of quality of life of melasma patients as per melasma quality of life scale (MELASQoL). Pigment Int. 2020;7:75-79.
  12. Balkrishnan R, McMichael A, Camacho FT, et al. Development and validation of a health‐related quality of life instrument for women with melasma. Br J Dermatol. 2003;149:572-577.
  13. Alikhan A, Daly M, Wu J, et al. Cost-effectiveness of a hydroquinone/tretinoin/fluocinolone acetonide cream combination in treating melasma in the United States. J Dermatolog Treat. 2010;21:276-281.
Author and Disclosure Information

Nicole A. Negbenebor, MD
Mohs Micrographic Surgery and Dermatologic Oncology Fellow
University of Iowa
Iowa City

Richard P. Usatine, MD
Professor, Family and Community Medicine
Professor, Dermatology and Cutaneous Surgery
University of Texas Health
San Antonio

Candrice R. Heath, MD
Assistant Professor, Department of Dermatology
Lewis Katz School of Medicine
Temple University
Philadelphia, Pennsylvania

The authors report no conflict of interest.

Simultaneously published in Cutis and The Journal of Family Practice.


Children ate more fruits and vegetables during longer meals: Study

Article Type
Changed
Tue, 04/04/2023 - 13:56

Adding 10 minutes to family mealtimes increased children’s consumption of fruits and vegetables by approximately one portion, based on data from 50 parent-child dyads.

Family meals are known to affect children’s food choices and preferences and can be an effective setting for improving children’s nutrition, wrote Mattea Dallacker, PhD, of the University of Mannheim, Germany, and colleagues.

However, the effect of extending meal duration on increasing fruit and vegetable intake in particular has not been examined, they said.

In a study published in JAMA Network Open, the researchers provided two free evening meals to 50 parent-child dyads under each of two conditions. The control condition was each family's self-defined regular mealtime duration (an average meal was 20.83 minutes), while the intervention condition extended the meal by 10 minutes (approximately 50% longer). The age of the parents ranged from 22 to 55 years, with a mean of 43 years; 72% of the parent participants were mothers. The children's ages ranged from 6 to 11 years, with a mean of 8 years, and boys and girls were represented in approximately equal numbers.

The study was conducted in a family meal laboratory setting in Berlin, and dyads were randomized to whether the longer or shorter meal came first. The primary outcome was the total number of pieces of fruits and vegetables eaten by the child at each of the two meals.

Both meals were the “typical German evening meal of sliced bread, cold cuts of cheese and meat, and bite-sized pieces of fruits and vegetables,” followed by a dessert course of chocolate pudding or fruit yogurt and cookies, the researchers wrote. Beverages were water and one sugar-sweetened beverage; the specific foods and beverages were based on the child’s preferences, reported in an online preassessment, and the foods were consistent for the longer and shorter meals. All participants were asked not to eat for 2 hours prior to arriving for their meals at the laboratory.

During longer meals, children ate an average of seven additional bite-sized pieces of fruits and vegetables, which translates to approximately a full portion (defined as 100 g, such as a medium apple), the researchers wrote. The difference was significant compared with the shorter meals for fruits (P = .01) and vegetables (P < .001).

A piece of fruit or vegetable weighed approximately 10 g (6-10 g for grapes and tangerine segments; 10-14 g for cherry tomatoes; and 9-11 g for pieces of apple, banana, carrot, or cucumber). Other foods served with the meals included cheese, meats, butter, and sweet spreads.

Children also ate more slowly (defined as fewer bites per minute) during the longer meals, and they reported significantly greater satiety after the longer meals (P < .001 for both). The consumption of bread and cold cuts was similar for the two meal settings.

“Higher intake of fruits and vegetables during longer meals cannot be explained by longer exposure to food alone; otherwise, an increased intake of bread and cold cuts would have occurred,” the researchers wrote in their discussion. “One possible explanation is that the fruits and vegetables were cut into bite-sized pieces, making them convenient to eat.”

Further analysis showed that during the longer meals, more fruits and vegetables were consumed overall, but more vegetables were eaten from the start of the meal, while the additional fruit was eaten during the additional time at the end.

The findings were limited by several factors, primarily the use of a laboratory setting that does not generalize to natural eating environments, the researchers noted. Other potential limitations included the effect of video cameras on desirable behaviors and the limited ethnic and socioeconomic diversity of the study population, they said. The results were strengthened by the within-dyad design, which controlled for factors such as video observation, but more research is needed with more diverse groups and across longer time frames, the researchers said.

However, the results suggest that adding 10 minutes to a family mealtime can yield significant improvements in children’s diets, they said. They suggested strategies including playing music chosen by the child/children and setting rules that everyone must remain at the table for a certain length of time, with fruits and vegetables available on the table.

“If the effects of this simple, inexpensive, and low-threshold intervention prove stable over time, it could contribute to addressing a major public health problem,” the researchers concluded.

Findings intriguing, more data needed

The current study is important because fruit and vegetable intake in the majority of children falls below the recommended daily allowance, Karalyn Kinsella, MD, a pediatrician in private practice in Cheshire, Conn., said in an interview.

The key take-home message for clinicians is the continued need to stress the importance of family meals, said Dr. Kinsella. “Many children continue to be overbooked with activities, and it may be rare for many families to sit down together for a meal for any length of time.”

Don’t discount the potential effect of a longer school lunch on children’s fruit and vegetable consumption as well, she added. “Advocating for longer lunch time is important, as many kids report not being able to finish their lunch at school.”

The current study was limited by being conducted in a lab setting, which may have influenced children’s desire for different foods, “also they had fewer distractions, and were being offered favorite foods,” said Dr. Kinsella.

Looking ahead, “it would be interesting to see if this result carried over to nonpreferred fruits and veggies and made any difference for picky eaters,” she said. 

The study received no outside funding. The open-access publication of the study (but not the study itself) was supported by the Max Planck Institute for Human Development Library Open Access Fund. The researchers had no financial conflicts to disclose. Dr. Kinsella had no financial conflicts to disclose and serves on the editorial advisory board of Pediatric News.


Recurrent Oral and Gluteal Cleft Erosions

Article Type
Changed
Wed, 04/05/2023 - 10:28

The Diagnosis: Lichen Planus Pemphigoides

Lichen planus pemphigoides (LPP) is a rare acquired autoimmune blistering disorder with an estimated worldwide prevalence of approximately 1 in 1,000,000 individuals.1 It often manifests with overlapping features of both lichen planus (LP) and bullous pemphigoid (BP). The condition usually presents in the fifth decade of life and has a slight female predominance.2 Although primarily idiopathic, it has been associated with certain medications and treatments, such as angiotensin-converting enzyme inhibitors, programmed cell death protein 1 inhibitors, programmed cell death ligand 1 inhibitors, labetalol, narrowband UVB, and psoralen plus UVA.3,4

Patients initially present with lesions of classic LP: pink-purple, flat-topped, pruritic, polygonal papules and plaques.5 After weeks to months, tense vesicles and bullae usually develop on the sites of LP as well as on uninvolved skin. One study found a mean lag time of about 8.3 months from the onset of LP to blistering,5 but concurrent presentations of both have been reported.1 In addition, oral mucosal involvement has been seen in 36% of cases. The most commonly affected sites are the extremities; however, involvement can be widespread.2

The pathogenesis of LPP currently is unknown. It has been proposed that in LP, injury of basal keratinocytes exposes hidden basement membrane and hemidesmosome antigens, including BP180, a 180-kDa transmembrane protein of the basement membrane zone (BMZ),6 which triggers an immune response in which T cells recognize the extracellular portion of BP180 and antibodies are formed against the likely autoantigen.1 One study has suggested that the autoantigen in LPP is the MCW-4 epitope within the C-terminal end of the NC16A domain of BP180.7

Histopathology of LPP reveals characteristics of both LP and BP. Typical features of LP on hematoxylin and eosin (H&E) staining include lichenoid lymphocytic interface dermatitis, sawtooth rete ridges, wedge-shaped hypergranulosis, and colloid bodies, as demonstrated in the biopsy of our patient’s gluteal cleft lesion (quiz image 1), while the predominant feature of BP on H&E staining is a subepidermal bulla with eosinophils.2 Typically, direct immunofluorescence (DIF) shows linear deposits of IgG and/or C3 along the BMZ. Indirect immunofluorescence (IIF) often reveals IgG against the roof of the BMZ on a human split-skin substrate.1 Antibodies against BP180 or, uncommonly, BP230 often are detected on enzyme-linked immunosorbent assay (ELISA). For our patient, IIF and ELISA tests were positive. Given the clinical presentation with recurrent oral and gluteal cleft erosions, the histologic findings, and the results of our patient’s immunological testing, the diagnosis of LPP was made.

Topical steroids often are used to treat localized LPP.8 Oral prednisone also may be given for widespread or unresponsive disease.9 Other treatments include azathioprine, mycophenolate mofetil, hydroxychloroquine, dapsone, tetracycline in combination with nicotinamide, acitretin, ustekinumab, baricitinib, and rituximab with intravenous immunoglobulin.3,8,10-12 Any potential medication culprits should be discontinued.9 Patients with oral involvement may require a soft diet to avoid further mucosal insult.10 Additionally, providers should consider dentistry, ophthalmology, and/or otolaryngology referrals depending on disease severity.

Bullous pemphigoid, the most common autoimmune blistering disease, has an estimated incidence of 10 to 43 per million individuals per year.2 Classically, it presents with tense bullae on the skin of the lower abdomen, thighs, groin, forearms, and axillae. Circulating antibodies against 2 BMZ proteins—BP180 and BP230—are important factors in BP pathogenesis.2 Diagnosis of BP is based on clinical features, histologic findings, and immunological studies including DIF, IIF, and ELISA. An eosinophil-rich subepidermal split typically is seen on H&E staining (Figure 1).

FIGURE 1. Bullous pemphigoid. An eosinophil-rich subepidermal blister is present (H&E, original magnification ×200).

Direct immunofluorescence displays linear IgG and/or C3 staining at the BMZ. Indirect immunofluorescence on a human salt-split skin substrate commonly shows linear BMZ deposition on the roof of the blister.2 Indirect immunofluorescence for IgG deposition on monkey esophagus substrate shows linear BMZ deposition. Antibodies against the NC16A domain of BP180 (NC16A-BP180) are dominant, but antibodies against BP230 also are detected with ELISA.2 Further studies have indicated that the NC16A epitopes of BP180 targeted in BP are MCW-0-3,2 different from the MCW-4 autoantigen targeted in LPP.7

Paraneoplastic pemphigus (PNP) is another diagnosis to consider. Patients with PNP initially present with oral findings—most commonly chronic, erosive, and painful mucositis—followed by cutaneous involvement, which varies from the development of bullae to the formation of plaques similar to those of LP.13 The latter, in combination with oral erosions, may appear clinically similar to LPP. The results of DIF in conjunction with IIF and ELISA may help to further differentiate these disorders. Direct immunofluorescence in PNP typically reveals positive intercellular and/or BMZ IgG and C3, while DIF in LPP reveals deposition along the BMZ alone. Indirect immunofluorescence performed on rat bladder epithelium is particularly useful, as binding of IgG to rat bladder epithelium is characteristic of PNP and not seen in other disorders.14 Lastly, patients with PNP may develop IgG antibodies to various antigens such as desmoplakin I, desmoplakin II, envoplakin, periplakin, BP230, desmoglein 1, and desmoglein 3, which would not be expected in LPP patients.15 Hematoxylin and eosin staining differs from LPP primarily in the location of the blister, which is intraepidermal. Acantholysis with hemorrhagic bullae can be seen (Figure 2).

FIGURE 2. Paraneoplastic pemphigus. Acantholysis, hemorrhagic bullae formation, and suprabasilar dyscohesion are present (H&E, original magnification ×100).

Classic LP is an inflammatory disorder that mainly affects adults, with an estimated prevalence of less than 1%.16 The classic form presents with purple, flat-topped, pruritic, polygonal papules and plaques of varying size that often are characterized by Wickham striae. Lichen planus possesses a broad spectrum of subtypes involving different locations, though skin lesions usually are localized to the extremities. Despite an unknown etiology, activated T cells and T helper type 1 cytokines are considered key in keratinocyte injury. Compact orthokeratosis, wedge-shaped hypergranulosis, focal dyskeratosis, and colloid bodies typically are found on H&E staining, along with a dense bandlike lymphohistiocytic infiltrate at the dermoepidermal junction (DEJ) (Figure 3). Direct immunofluorescence typically shows a shaggy band of fibrinogen along the DEJ in addition to colloid bodies that stain with various autoantibodies including IgM, IgG, IgA, and C3.16

FIGURE 3. Classic lichen planus. Lichenoid interface dermatitis at the dermoepidermal junction (H&E, original magnification ×100).

Bullous LP is a rare variant of LP that commonly develops on the oral mucosa and the legs, with blisters confined to pre-existing LP lesions.9 The pathogenesis is related to an epidermal inflammatory infiltrate that leads to basal layer destruction, followed by dermal-epidermal separations that cause blistering.17 Because the pathophysiology does not involve autoantibody production, bullous LP does not show positive DIF, IIF, or ELISA findings. Histopathology typically displays an extensive inflammatory infiltrate and degeneration of the basal keratinocytes, resulting in large dermal-epidermal separations called Max-Joseph spaces (Figure 4).17 Colloid bodies are prominent in bullous LP but rarely are seen in LPP; eosinophils are much more prominent in LPP compared to bullous LP.18 Unlike in LPP, DIF usually is negative in bullous LP, though lichenoid lesions may exhibit globular deposition of IgM, IgG, and IgA in the colloid bodies of the lower epidermis and/or papillary dermis. As in classic LP, DIF of the biopsy specimen shows linear or shaggy deposits of fibrinogen at the DEJ.17

FIGURE 4. Bullous lichen planus. A Max-Joseph space is visible due to a lichenoid infiltrate and degeneration of basal keratinocytes (H&E, original magnification ×100).

References
  1. Hübner F, Langan EA, Recke A. Lichen planus pemphigoides: from lichenoid inflammation to autoantibody-mediated blistering. Front Immunol. 2019;10:1389.
  2. Montagnon CM, Tolkachjov SN, Murrell DF, et al. Subepithelial autoimmune blistering dermatoses: clinical features and diagnosis. J Am Acad Dermatol. 2021;85:1-14.
  3. Hackländer K, Lehmann P, Hofmann SC. Successful treatment of lichen planus pemphigoides using acitretin as monotherapy. J Dtsch Dermatol Ges. 2014;12:818-819.
  4. Boyle M, Ashi S, Puiu T, et al. Lichen planus pemphigoides associated with PD-1 and PD-L1 inhibitors: a case series and review of the literature. Am J Dermatopathol. 2022;44:360-367.
  5. Zaraa I, Mahfoudh A, Sellami MK, et al. Lichen planus pemphigoides: four new cases and a review of the literature. Int J Dermatol. 2013;52:406-412.
  6. Bolognia J, Schaffer J, Cerroni L, eds. Dermatology. 4th ed. Elsevier; 2018.
  7. Zillikens D, Caux F, Mascaru JM Jr, et al. Autoantibodies in lichen planus pemphigoides react with a novel epitope within the C-terminal NC16A domain of BP180. J Invest Dermatol. 1999;113:117-121.
  8. Knisley RR, Petropolis AA, Mackey VT. Lichen planus pemphigoides treated with ustekinumab. Cutis. 2017;100:415-418.
  9. Liakopoulou A, Rallis E. Bullous lichen planus—a review. J Dermatol Case Rep. 2017;11:1-4.
  10. Weston G, Payette M. Update on lichen planus and its clinical variants. Int J Womens Dermatol. 2015;1:140-149.
  11. Moussa A, Colla TG, Asfour L, et al. Effective treatment of refractory lichen planus pemphigoides with a Janus kinase-1/2 inhibitor. Clin Exp Dermatol. 2022;47:2040-2041.
  12. Brennan M, Baldissano M, King L, et al. Successful use of rituximab and intravenous gamma globulin to treat checkpoint inhibitor-induced severe lichen planus pemphigoides. Skinmed. 2020;18:246-249.
  13. Kim JH, Kim SC. Paraneoplastic pemphigus: paraneoplastic autoimmune disease of the skin and mucosa. Front Immunol. 2019;10:1259.
  14. Stevens SR, Griffiths CE, Anhalt GJ, et al. Paraneoplastic pemphigus presenting as a lichen planus pemphigoides-like eruption. Arch Dermatol. 1993;129:866-869. 
  15. Ohzono A, Sogame R, Li X, et al. Clinical and immunological findings in 104 cases of paraneoplastic pemphigus. Br J Dermatol. 2015;173:1447-1452.
  16. Tziotzios C, Lee JYW, Brier T, et al. Lichen planus and lichenoid dermatoses: clinical overview and molecular basis. J Am Acad Dermatol. 2018;79:789-804.
  17. Papara C, Danescu S, Sitaru C, et al. Challenges and pitfalls between lichen planus pemphigoides and bullous lichen planus. Australas J Dermatol. 2022;63:165-171.
  18. Tripathy DM, Vashisht D, Rathore G, et al. Bullous lichen planus vs lichen planus pemphigoides: a diagnostic dilemma. Indian Dermatol Online J. 2022;13:282-284.
Author and Disclosure Information

Drs. Zhang, Braniecki, and Haber are from the Department of Dermatology, University of Illinois, Chicago. Ms. Hunt is from the Homer Stryker School of Medicine, Western Michigan University, Kalamazoo. Drs. Liu, Arps, and Tan are from Consolidated Pathology Consultants, Libertyville, Illinois.

The authors report no conflict of interest.

Correspondence: Jane Zhang, MD, University of Illinois, College of Medicine, Department of Dermatology, College of Medicine East Building (CME), RM 380, 808 S Wood St, Chicago, IL 60612 ([email protected]).

Issue
Cutis - 111(4)
Page Number
185,194-196

The Diagnosis: Lichen Planus Pemphigoides

Lichen planus pemphigoides (LPP) is a rare acquired autoimmune blistering disorder with an estimated worldwide prevalence of approximately 1 in 1,000,000 individuals.1 It often manifests with overlapping features of both lichen planus (LP) and bullous pemphigoid (BP). The condition usually presents in the fifth decade of life and has a slight female predominance.2 Although primarily idiopathic, it has been associated with certain medications and treatments, such as angiotensin-converting enzyme inhibitors, programmed cell death protein 1 inhibitors, programmed cell death ligand 1 inhibitors, labetalol, narrowband UVB, and psoralen plus UVA.3,4

Patients initially present with lesions of classic LP: pink-purple, flat-topped, pruritic, polygonal papules and plaques.5 After weeks to months, tense vesicles and bullae usually develop both on the sites of LP and on uninvolved skin. One study found a mean lag time of approximately 8.3 months between the onset of LP and blistering,5 but concurrent presentations have been reported.1 In addition, oral mucosal involvement has been seen in 36% of cases. The extremities are the most commonly affected sites; however, involvement can be widespread.2

The pathogenesis of LPP currently is unknown. It has been proposed that in LP, injury of basal keratinocytes exposes hidden basement membrane and hemidesmosome antigens, including BP180, a 180-kDa transmembrane protein of the basement membrane zone (BMZ),6 triggering an immune response in which T cells recognize the extracellular portion of BP180 and antibodies form against this likely autoantigen.1 One study has suggested that the autoantigen in LPP is the MCW-4 epitope within the C-terminal end of the NC16A domain of BP180.7

Histopathology of LPP reveals characteristics of both LP and BP. Typical features of LP on hematoxylin and eosin (H&E) staining include lichenoid lymphocytic interface dermatitis, sawtooth rete ridges, wedge-shaped hypergranulosis, and colloid bodies, as demonstrated in the biopsy of our patient’s gluteal cleft lesion (quiz image 1), while the predominant feature of BP on H&E staining is a subepidermal bulla with eosinophils.2 Typically, direct immunofluorescence (DIF) shows linear deposits of IgG and/or C3 along the BMZ. Indirect immunofluorescence (IIF) often reveals IgG against the roof of the BMZ on a human split-skin substrate.1 Antibodies against BP180, or uncommonly BP230, often are detected on enzyme-linked immunosorbent assay (ELISA). For our patient, IIF and ELISA were positive. Given the clinical presentation with recurrent oral and gluteal cleft erosions, the histologic findings, and the results of the immunological testing, the diagnosis of LPP was made.

Topical steroids often are used to treat localized LPP.8 Oral prednisone also may be given for widespread or unresponsive disease.9 Other treatments include azathioprine, mycophenolate mofetil, hydroxychloroquine, dapsone, tetracycline in combination with nicotinamide, acitretin, ustekinumab, baricitinib, and rituximab with intravenous immunoglobulin.3,8,10-12 Any potential medication culprits should be discontinued.9 Patients with oral involvement may require a soft diet to avoid further mucosal insult.10 Additionally, providers should consider dentistry, ophthalmology, and/or otolaryngology referrals depending on disease severity.

Bullous pemphigoid, the most common autoimmune blistering disease, has an estimated incidence of 10 to 43 per million individuals per year.2 Classically, it presents with tense bullae on the skin of the lower abdomen, thighs, groin, forearms, and axillae. Circulating antibodies against 2 BMZ proteins—BP180 and BP230—are important factors in BP pathogenesis.2 Diagnosis of BP is based on clinical features, histologic findings, and immunological studies including DIF, IIF, and ELISA. An eosinophil-rich subepidermal split typically is seen on H&E staining (Figure 1).

FIGURE 1. Bullous pemphigoid. An eosinophil-rich subepidermal blister is present (H&E, original magnification ×200).

Direct immunofluorescence displays linear IgG and/or C3 staining at the BMZ. Indirect immunofluorescence on a human salt-split skin substrate commonly shows linear BMZ deposition on the roof of the blister,2 and IIF for IgG on monkey esophagus substrate shows linear BMZ deposition. On ELISA, antibodies against the NC16A domain of BP180 are dominant, but antibodies against BP230 also are detected.2 Further studies have indicated that the NC16A epitopes of BP180 targeted in BP are MCW-0-3,2 distinct from the MCW-4 epitope targeted in LPP.7

Paraneoplastic pemphigus (PNP) is another diagnosis to consider. Patients with PNP initially present with oral findings—most commonly chronic, erosive, and painful mucositis—followed by cutaneous involvement, which varies from the development of bullae to the formation of plaques similar to those of LP.13 The latter, in combination with oral erosions, may appear clinically similar to LPP. The results of DIF in conjunction with IIF and ELISA may help to further differentiate these disorders. Direct immunofluorescence in PNP typically reveals intercellular and/or BMZ IgG and C3, while DIF in LPP reveals deposition along the BMZ alone. Indirect immunofluorescence performed on rat bladder epithelium is particularly useful, as binding of IgG to rat bladder epithelium is characteristic of PNP and is not seen in the other disorders.14 Lastly, patients with PNP may develop IgG antibodies to various antigens such as desmoplakin I, desmoplakin II, envoplakin, periplakin, BP230, desmoglein 1, and desmoglein 3, which would not be expected in patients with LPP.15 Hematoxylin and eosin staining differs from that of LPP primarily in the location of the blister, which is intraepidermal in PNP. Acantholysis with hemorrhagic bullae can be seen (Figure 2).

FIGURE 2. Paraneoplastic pemphigus. Acantholysis, hemorrhagic bullae formation, and suprabasilar dyscohesion are present (H&E, original magnification ×100).

Classic LP is an inflammatory disorder that mainly affects adults, with an estimated prevalence of less than 1%.16 The classic form presents with purple, flat-topped, pruritic, polygonal papules and plaques of varying size that often are characterized by Wickham striae. Lichen planus encompasses a broad spectrum of subtypes involving different locations, though skin lesions usually are localized to the extremities. Although the etiology is unknown, activated T cells and T helper type 1 cytokines are considered key in keratinocyte injury. Compact orthokeratosis, wedge-shaped hypergranulosis, focal dyskeratosis, and colloid bodies typically are found on H&E staining, along with a dense bandlike lymphohistiocytic infiltrate at the dermoepidermal junction (DEJ)(Figure 3). Direct immunofluorescence typically shows a shaggy band of fibrinogen along the DEJ in addition to colloid bodies that stain with various immunoreactants, including IgM, IgG, IgA, and C3.16

FIGURE 3. Classic lichen planus. Lichenoid interface dermatitis at the dermoepidermal junction (H&E, original magnification ×100).

Bullous LP is a rare variant of LP that commonly develops on the oral mucosa and the legs, with blisters confined to pre-existing LP lesions.9 The pathogenesis involves an epidermal inflammatory infiltrate that destroys the basal layer, followed by dermal-epidermal separations that cause blistering.17 Because the pathophysiology does not involve autoantibody production, IIF and ELISA are negative in bullous LP. Histopathology typically displays an extensive inflammatory infiltrate and degeneration of the basal keratinocytes, resulting in large dermal-epidermal separations called Max-Joseph spaces (Figure 4).17 Colloid bodies are prominent in bullous LP but rarely are seen in LPP; eosinophils are much more prominent in LPP than in bullous LP.18 Unlike in LPP, DIF usually is negative in bullous LP, though lichenoid lesions may exhibit globular deposition of IgM, IgG, and IgA in the colloid bodies of the lower epidermis and/or papillary dermis. As in classic LP, DIF shows linear or shaggy deposits of fibrinogen at the DEJ.17

FIGURE 4. Bullous lichen planus. A Max-Joseph space is visible due to a lichenoid infiltrate and degeneration of basal keratinocytes (H&E, original magnification ×100).
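
The immunopathologic contrasts drawn above lend themselves to a compact summary. The following Python sketch is purely illustrative and is not part of the original article or a diagnostic instrument; it simply restates the DIF, IIF, ELISA, and histologic findings described in the text for each entity in the differential.

# Illustrative only: a lookup table paraphrasing the immunopathologic
# profiles discussed in the text for LPP and its mimics. Not a
# diagnostic tool; entries summarize the article, nothing more.

FINDINGS = {
    "lichen planus pemphigoides": {
        "DIF": "linear IgG and/or C3 along the BMZ",
        "IIF": "IgG on the epidermal (roof) side of human split-skin substrate",
        "ELISA": "IgG to BP180 (MCW-4 epitope of NC16A); BP230 uncommon",
        "histology": "lichenoid interface dermatitis plus subepidermal bulla with eosinophils",
    },
    "bullous pemphigoid": {
        "DIF": "linear IgG and/or C3 along the BMZ",
        "IIF": "IgG on the roof of salt-split skin; linear BMZ deposition on monkey esophagus",
        "ELISA": "IgG to BP180 (MCW-0-3 epitopes of NC16A) and BP230",
        "histology": "eosinophil-rich subepidermal blister",
    },
    "paraneoplastic pemphigus": {
        "DIF": "intercellular and/or BMZ IgG and C3",
        "IIF": "IgG binding to rat bladder epithelium (characteristic)",
        "ELISA": "IgG to plakins, desmogleins 1 and 3, BP230",
        "histology": "intraepidermal blister with acantholysis",
    },
    "bullous lichen planus": {
        "DIF": "usually negative; fibrinogen at the DEJ, colloid-body deposits possible",
        "IIF": "negative",
        "ELISA": "negative",
        "histology": "lichenoid infiltrate with Max-Joseph space; prominent colloid bodies",
    },
}

def compare(feature: str) -> None:
    """Print one feature (e.g., 'DIF') across all four diagnoses."""
    for diagnosis, profile in FINDINGS.items():
        print(f"{diagnosis}: {profile[feature]}")

if __name__ == "__main__":
    compare("DIF")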


References
  1. Hübner F, Langan EA, Recke A. Lichen planus pemphigoides: from lichenoid inflammation to autoantibody-mediated blistering. Front Immunol. 2019;10:1389.
  2. Montagnon CM, Tolkachjov SN, Murrell DF, et al. Subepithelial autoimmune blistering dermatoses: clinical features and diagnosis. J Am Acad Dermatol. 2021;85:1-14.
  3. Hackländer K, Lehmann P, Hofmann SC. Successful treatment of lichen planus pemphigoides using acitretin as monotherapy. J Dtsch Dermatol Ges. 2014;12:818-819.
  4. Boyle M, Ashi S, Puiu T, et al. Lichen planus pemphigoides associated with PD-1 and PD-L1 inhibitors: a case series and review of the literature. Am J Dermatopathol. 2022;44:360-367.
  5. Zaraa I, Mahfoudh A, Sellami MK, et al. Lichen planus pemphigoides: four new cases and a review of the literature. Int J Dermatol. 2013;52:406-412.
  6. Bolognia J, Schaffer J, Cerroni L, eds. Dermatology. 4th ed. Elsevier; 2018.
  7. Zillikens D, Caux F, Mascaró JM Jr, et al. Autoantibodies in lichen planus pemphigoides react with a novel epitope within the C-terminal NC16A domain of BP180. J Invest Dermatol. 1999;113:117-121.
  8. Knisley RR, Petropolis AA, Mackey VT. Lichen planus pemphigoides treated with ustekinumab. Cutis. 2017;100:415-418.
  9. Liakopoulou A, Rallis E. Bullous lichen planus—a review. J Dermatol Case Rep. 2017;11:1-4.
  10. Weston G, Payette M. Update on lichen planus and its clinical variants. Int J Womens Dermatol. 2015;1:140-149.
  11. Moussa A, Colla TG, Asfour L, et al. Effective treatment of refractory lichen planus pemphigoides with a Janus kinase-1/2 inhibitor. Clin Exp Dermatol. 2022;47:2040-2041.
  12. Brennan M, Baldissano M, King L, et al. Successful use of rituximab and intravenous gamma globulin to treat checkpoint inhibitor-induced severe lichen planus pemphigoides. Skinmed. 2020;18:246-249.
  13. Kim JH, Kim SC. Paraneoplastic pemphigus: paraneoplastic autoimmune disease of the skin and mucosa. Front Immunol. 2019;10:1259.
  14. Stevens SR, Griffiths CE, Anhalt GJ, et al. Paraneoplastic pemphigus presenting as a lichen planus pemphigoides-like eruption. Arch Dermatol. 1993;129:866-869. 
  15. Ohzono A, Sogame R, Li X, et al. Clinical and immunological findings in 104 cases of paraneoplastic pemphigus. Br J Dermatol. 2015;173:1447-1452.
  16. Tziotzios C, Lee JYW, Brier T, et al. Lichen planus and lichenoid dermatoses: clinical overview and molecular basis. J Am Acad Dermatol. 2018;79:789-804.
  17. Papara C, Danescu S, Sitaru C, et al. Challenges and pitfalls between lichen planus pemphigoides and bullous lichen planus. Australas J Dermatol. 2022;63:165-171.
  18. Tripathy DM, Vashisht D, Rathore G, et al. Bullous lichen planus vs lichen planus pemphigoides: a diagnostic dilemma. Indian Dermatol Online J. 2022;13:282-284.
Display Headline
Recurrent Oral and Gluteal Cleft Erosions
Questionnaire Body

A 71-year-old woman with no relevant medical history presented with recurrent painful erosions on the gingivae and gluteal cleft of 1 year’s duration. She previously was diagnosed by her periodontist with erosive lichen planus and was prescribed topical and oral steroids with minimal improvement. She denied fever, chills, weakness, fatigue, vision changes, eye pain, and sore throat. Dermatologic examination revealed edematous and erythematous upper and lower gingivae with mild erosions, as well as thin, eroded, erythematous plaques within the gluteal cleft. Indirect immunofluorescence revealed IgG with epidermal localization on a human split-skin substrate, and an enzyme-linked immunosorbent assay revealed positive IgG to bullous pemphigoid antigen 180 (BP180) and negative IgG to BP230. A 4-mm punch biopsy of the gluteal cleft was performed.

H&E, original magnification ×100.

Erythematous eroded plaque of the gluteal cleft.
