Wide Regional Variation in Dementia Risk Across the United States
TOPLINE:
The likelihood of receiving a dementia diagnosis in older adults varies significantly by region across the United States, a new study suggests. Rates ranged from 1.7% to 5.4%, with variations more pronounced in those aged 66-74 years and Black or Hispanic individuals.
METHODOLOGY:
- Researchers analyzed newly diagnosed cases of Alzheimer’s disease and related dementias (ADRD) using the 2018-2019 Medicare claims data for 4.8 million older adults across 306 hospital referral regions (HRRs).
- Participants were categorized by age and race or ethnicity to examine variations in diagnosis rates.
- Regional characteristics such as education level and prevalence of obesity, smoking, and diabetes were included to adjust for population risk factors.
- ADRD-specific diagnostic intensity was calculated as the ratio of the observed-to-expected new cases of ADRD in each HRR.
TAKEAWAY:
- In an unadjusted analysis, 3% of older adults overall received a new ADRD diagnosis in 2019, with rates ranging from 1.7 to 5.4 per 100 individuals across HRRs and varying by age category.
- Regions in the South had the highest unadjusted concentration of ADRD cases, and regions in the West/Northwest had the lowest.
- The ADRD-specific diagnostic intensity ranged from 0.69 to 1.47 and varied most among Black and Hispanic individuals and those aged 66-74 years.
- Regional differences in ADRD diagnosis rates are not fully explained by population risk factors, indicating potential health system-level differences.
IN PRACTICE:
“From place to place, the likelihood of getting your dementia diagnosed varies, and that may happen because of everything from practice norms for healthcare providers to individual patients’ knowledge and care-seeking behavior. These findings go beyond demographic and population-level differences in risk and indicate that there are health system-level differences that could be targeted and remediated,” lead author Julie P.W. Bynum, MD, MPH, said in a press release.
SOURCE:
The study was led by Dr. Bynum, professor of internal medicine, University of Michigan Medical School, Ann Arbor, Michigan, and published online in Alzheimer’s & Dementia.
LIMITATIONS:
The results may not be generalizable to other groups. The observational design of the study cannot completely negate residual confounding. The measures of population risks are coarser than those used in well-characterized epidemiologic studies, leading to potential imprecision. Finally, the study was not designed to determine whether regional differences in the likelihood of ADRD diagnosis resulted in differences in the population health outcomes.
DISCLOSURES:
The study was supported by a grant from the National Institute on Aging. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Parkinson’s Risk in Women and History of Migraine: New Data
TOPLINE:
A history of migraine is not associated with an elevated risk for Parkinson’s disease (PD) in women, regardless of headache frequency or migraine subtype, a new study suggests.
METHODOLOGY:
- Researchers analyzed data on 39,312 women health professionals aged ≥ 45 years and having no history of PD who enrolled in the Women’s Health Study between 1992 and 1995 and were followed until 2021.
- At baseline, 7321 women (18.6%) had migraine.
- The mean follow-up duration was 22 years.
- The primary outcome was a self-reported, physician-confirmed diagnosis of PD.
TAKEAWAY:
- During the study period, 685 women self-reported a diagnosis of PD.
- Of these, 18.7% of reported cases were in women with any migraine and 81.3% in women without migraine.
- No significant association was found between PD risk and a history of migraine, migraine subtypes (with or without aura), or migraine frequency.
- Migraine was not associated with a higher risk for PD than nonmigraine headaches.
IN PRACTICE:
“These results are reassuring for women who have migraine, which itself causes many burdens, that they don’t have to worry about an increased risk of Parkinson’s disease in the future,” study author Tobias Kurth, Charité - Universitätsmedizin Berlin, Germany, said in a press release.
SOURCE:
The study was led by Ricarda S. Schulz, MSc, Charité - Universitätsmedizin Berlin. It was published online in Neurology.
LIMITATIONS:
The study’s findings may not be generalizable to other populations, such as men and non-White individuals. The self-reported data on migraine and PD may be subject to inaccuracies. PD is often not diagnosed until symptoms have reached an advanced stage, potentially leading to cases being underreported. Changes in the status and frequency of migraine over the study period were not accounted for, which may have affected the results.
DISCLOSURES:
The authors did not disclose any specific funding for this work. The Women’s Health Study was supported by the National Cancer Institute and National Heart, Lung, and Blood Institute. Two authors reported having financial ties outside this work. Full disclosures are available in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Baby-Led Weaning
I first heard the term “baby-led weaning” about 20 years ago, which, it turns out, was just a few years after the concept was introduced to the public by a public health nurse/midwife in Britain. Starting infants on solid foods when they could feed themselves didn’t sound as off-the-wall to me as it did to most other folks, but I chose not to include it in my list of standard recommendations at the 4- and 6-month well child visits. If any parent had asked me my opinion, I would have told them to give it a try, with a few specific cautions about what and how. But I don’t recall any parents asking me. The ones who knew me well, or had read or at least heard about my book on picky eating, must have already figured out what my answer would be. The parents who didn’t know me may have been afraid I would tell them it was a crazy idea.
Twelve years ago I retired from office practice and hadn’t heard a peep about baby-led weaning until last week when I encountered a story in The New York Times. It appears that while I have been reveling in my post-practice existence, baby-led weaning has become a “thing.” As the author of the article observed: “The concept seems to appeal to millennials who favor parenting philosophies that prioritize child autonomy.”
Baby-led weaning’s traction has been so robust that the largest manufacturer of baby food in this country has been labeling some of its products “baby-led friendly” since 2021. Several online businesses have tapped into the growing market. One offers a very detailed free directory that lists almost any edible you can imagine, with recommendations on when and how each can be presented in a safe and appealing manner to little hand feeders. Of course, the company has also figured out a way to monetize the product.
Not surprisingly, the American Academy of Pediatrics (AAP) has remained silent on baby-led weaning. However, in The New York Times article, Dr. Mark R. Corkins, chair of the AAP nutrition committee, is quoted as describing baby-led weaning as “a social media–driven invention.”
While I was interested to learn about the concept’s growth and commercialization, I was troubled to find that like co-sleeping, sleep training, and exclusive breastfeeding, baby-led weaning has become one of those angst-producing topics that is torturing new parents who live every day in fear that they “aren’t doing it right.” We pediatricians might deserve a small dose of blame for not vigorously emphasizing that there are numerous ways to skin that cat known as parenting. However, social media websites and Mom chat rooms are probably more responsible for creating an atmosphere in which parents are afraid of being ostracized for the decisions they have made in good faith whether it is about weaning or when to start toilet training.
In isolated cultures, weaning a baby to solids was probably never a topic for discussion or debate. New parents did what their parents did, or more likely a child’s grandmother advised or took over the process herself. The child was fed what the rest of the family ate. If it was something the infant could handle himself you gave it to him. If not you mashed it up or maybe you chewed it for him into a consistency he could manage.
However, most new parents have become so distanced from their own parents’ childrearing practices geographically, temporally, and philosophically, that they must rely on folks like us and others whom they believe are, or at least claim to be, experts. Young adults are no longer hesitant to cross ethnic thresholds when they decide to be co-parents, meaning that any remnant of family tradition is either diluted or lost outright. In the void created by this abandonment of tradition, corporations were happy to step in with easy-to-prepare baby food that lacks in nutritional and dietary variety. Baby-led weaning is just one more logical step in the metamorphosis of our society’s infant feeding patterns.
I still have no problem with baby-led weaning as an option for parents, particularly if, with just a click of a mouse, they can access safe and healthy advice to make up for generations of grandmotherly experience acquired over hundreds of years.
It is one thing when parents hoping to encourage the process of self-feeding offer their infants an edible that may not be in the family’s usual diet. However, it is a totally different matter when a family allows itself to become dietary contortionists to accommodate a 4-year-old whose diet consists of a monotonous rotation of three pasta shapes topped with grated Parmesan cheese, and on a good day a raw carrot slice or two. Parents living in this nutritional wasteland may have given up on managing their children’s pickiness and may find it less stressful to join the child and eat a few forkfuls of pasta to preserve some semblance of a family dinner. Then, after the child has been put to bed, they have their own balanced meal.
Almost by definition family meals are a compromise. Even adults without children negotiate often unspoken menu patterns with their partners. “This evening we’ll have your favorite, I may have my favorite next week.”
Most parents of young children understand that the family’s diet may be a bit heavier on pasta than they might prefer and a little less varied when it comes to vegetables. It is just part of the deal. However, when mealtimes become totally dictated by the pickiness of a child, there is a problem. While a poorly structured child-led family diet may be nutritionally deficient, the bigger problem is that it is expensive in time and labor, two resources usually in short supply in young families.
Theoretically, infants who have led their own weaning are more likely to have been introduced to a broad variety of flavors and textures and this may carry them into childhood as more adventuresome eaters. Picky eating can be managed successfully and result in a family that can enjoy the psychological and emotional benefits of nutritionally balanced family meals, but it requires a combination of parental courage and patience.
It is unclear exactly how we got into a situation in which a generation of parents makes things more difficult for themselves by favoring practices that overemphasize child autonomy. It may be that these parents suffered under autocratic parents themselves, or more likely they have read too many novels or watched too many movies and TV shows in which parents were portrayed as overbearing or controlling. Or it may simply be that they haven’t had enough exposure to young children to realize that all children benefit, to varying degrees, from clear limits.
In the process of watching tens of thousands of parents, it has become clear to me that those who are the most successful are leaders and that they lead primarily by example. They have learned to be masters in the art of deception by creating a safe environment with sensible limits while at the same time fostering an atmosphere in which the child sees himself as participating in the process.
The biblical prophet Isaiah (11:6-9) in his description of how things will be different after the Lord acts to help his people predicts: “and a little child shall lead them.” This prediction fits nicely as the last in a string of crazy situations that includes a wolf living with a lamb and a leopard lying down with a calf.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
I first heard the term “baby-led weaning” about 20 years ago, which turns out was just a few years after the concept was introduced to the public by a public health/midwife in Britain. Starting infants on solid foods when they could feed themselves didn’t sound as off-the-wall to me as it did to most other folks, but I chose not to include it in my list of standard recommendations at the 4- and 6-month well child visits. If any parent had asked me my opinion I would have told them to give it a try with a few specific cautions about what and how. But, I don’t recall any parents asking me. The ones who knew me well or had read, or at least heard about, my book on picky eating must have already figured out what my answer would be. The parents who didn’t know me may have been afraid I would tell them it was a crazy idea.
Twelve years ago I retired from office practice and hadn’t heard a peep about baby-led weaning until last week when I encountered a story in The New York Times. It appears that while I have been reveling in my post-practice existence, baby-led weaning has become a “thing.” As the author of the article observed: “The concept seems to appeal to millennials who favor parenting philosophies that prioritize child autonomy.”
Baby-led weaning’s traction has been so robust that the largest manufacturer of baby food in this country has been labeling some of its products “baby-led friendly since 2021.” There are several online businesses that have tapped into the growing market. One offers a very detailed free directory that lists almost any edible you can imagine with recommendations of when and how they can be presented in a safe and appealing matter to little hand feeders. Of course the company has also figured out a way to monetize the product.
Not surprisingly the American Academy of Pediatrics (AAP) has remained silent on baby-led weaning. However, in The New York Times article, Dr. Mark R. Corkins, chair of the AAP nutrition committee, is quoted as describing baby-led weaning is “a social media–driven invention.”
While I was interested to learn about the concept’s growth and commercialization, I was troubled to find that like co-sleeping, sleep training, and exclusive breastfeeding, baby-led weaning has become one of those angst-producing topics that is torturing new parents who live every day in fear that they “aren’t doing it right.” We pediatricians might deserve a small dose of blame for not vigorously emphasizing that there are numerous ways to skin that cat known as parenting. However, social media websites and Mom chat rooms are probably more responsible for creating an atmosphere in which parents are afraid of being ostracized for the decisions they have made in good faith whether it is about weaning or when to start toilet training.
In isolated cultures, weaning a baby to solids was probably never a topic for discussion or debate. New parents did what their parents did, or more likely a child’s grandmother advised or took over the process herself. The child was fed what the rest of the family ate. If it was something the infant could handle himself you gave it to him. If not you mashed it up or maybe you chewed it for him into a consistency he could manage.
However, most new parents have become so distanced from their own parents’ childrearing practices geographically, temporally, and philosophically, that they must rely on folks like us and others whom they believe are, or at least claim to be, experts. Young adults are no longer hesitant to cross ethnic thresholds when they decide to be co-parents, meaning that any remnant of family tradition is either diluted or lost outright. In the void created by this abandonment of tradition, corporations were happy to step in with easy-to-prepare baby food that lacks in nutritional and dietary variety. Baby-led weaning is just one more logical step in the metamorphosis of our society’s infant feeding patterns.
I still have no problem with baby-led weaning as an option for parents, particularly if with just a click of a mouse they can access safe and healthy advice to make up for generations of grandmotherly experience acquired over hundreds of years. However,
It is one thing when parents hoping to encourage the process of self-feeding offer their infants an edible that may not be in the family’s usual diet. However, it is a totally different matter when a family allows itself to become dietary contortionists to a accommodate a 4-year-old whose diet consists of a monotonous rotation of three pasta shapes topped with grated Parmesan cheese, and on a good day a raw carrot slice or two. Parents living in this nutritional wasteland may have given up on managing their children’s pickiness, and may find it is less stressful to join the child and eat a few forkfuls of pasta to preserve some semblance of a family dinner. Then after the child has been put to bed they have their own balanced meal.
Almost by definition family meals are a compromise. Even adults without children negotiate often unspoken menu patterns with their partners. “This evening we’ll have your favorite, I may have my favorite next week.”
Most parents of young children understand that their diet may be a bit heavier on pasta than they might prefer and a little less varied when it comes to vegetables. It is just part of the deal. However, when mealtimes become totally dictated by the pickiness of a child there is a problem. While a poorly structured child-led family diet may be nutritionally deficient, the bigger problem is that it is expensive in time and labor, two resources usually in short supply in young families.
Theoretically, infants who have led their own weaning are more likely to have been introduced to a broad variety of flavors and textures and this may carry them into childhood as more adventuresome eaters. Picky eating can be managed successfully and result in a family that can enjoy the psychological and emotional benefits of nutritionally balanced family meals, but it requires a combination of parental courage and patience.
It is unclear exactly how we got into a situation in which a generation of parents makes things more difficult for themselves by favoring practices that overemphasize child autonomy. It may be that the parents had suffered under autocratic parents themselves, or more likely they have read too many novels or watched too many movies and TV shows in which the parents were portrayed as overbearing or controlling. Or, it may simply be that they haven’t had enough exposure to young children to realize that they all benefit from clear limits to a varying degree.
In the process of watching tens of thousands of parents, it has become clear to me that those who are the most successful are leaders and that they lead primarily by example. They have learned to be masters in the art of deception by creating a safe environment with sensible limits while at the same time fostering an atmosphere in which the child sees himself as participating in the process.
The biblical prophet Isaiah (11:6-9) in his description of how things will be different after the Lord acts to help his people predicts: “and a little child shall lead them.” This prediction fits nicely as the last in a string of crazy situations that includes a wolf living with a lamb and a leopard lying down with a calf.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
I first heard the term “baby-led weaning” about 20 years ago, which turns out was just a few years after the concept was introduced to the public by a public health/midwife in Britain. Starting infants on solid foods when they could feed themselves didn’t sound as off-the-wall to me as it did to most other folks, but I chose not to include it in my list of standard recommendations at the 4- and 6-month well child visits. If any parent had asked me my opinion I would have told them to give it a try with a few specific cautions about what and how. But, I don’t recall any parents asking me. The ones who knew me well or had read, or at least heard about, my book on picky eating must have already figured out what my answer would be. The parents who didn’t know me may have been afraid I would tell them it was a crazy idea.
Twelve years ago I retired from office practice and hadn’t heard a peep about baby-led weaning until last week when I encountered a story in The New York Times. It appears that while I have been reveling in my post-practice existence, baby-led weaning has become a “thing.” As the author of the article observed: “The concept seems to appeal to millennials who favor parenting philosophies that prioritize child autonomy.”
Baby-led weaning’s traction has been so robust that the largest manufacturer of baby food in this country has been labeling some of its products “baby-led friendly since 2021.” There are several online businesses that have tapped into the growing market. One offers a very detailed free directory that lists almost any edible you can imagine with recommendations of when and how they can be presented in a safe and appealing manner to little hand feeders. Of course, the company has also figured out a way to monetize the product.
Not surprisingly, the American Academy of Pediatrics (AAP) has remained silent on baby-led weaning. However, in The New York Times article, Dr. Mark R. Corkins, chair of the AAP nutrition committee, is quoted as describing baby-led weaning as “a social media–driven invention.”
While I was interested to learn about the concept’s growth and commercialization, I was troubled to find that like co-sleeping, sleep training, and exclusive breastfeeding, baby-led weaning has become one of those angst-producing topics that is torturing new parents who live every day in fear that they “aren’t doing it right.” We pediatricians might deserve a small dose of blame for not vigorously emphasizing that there are numerous ways to skin that cat known as parenting. However, social media websites and Mom chat rooms are probably more responsible for creating an atmosphere in which parents are afraid of being ostracized for the decisions they have made in good faith whether it is about weaning or when to start toilet training.
In isolated cultures, weaning a baby to solids was probably never a topic for discussion or debate. New parents did what their parents did, or more likely a child’s grandmother advised or took over the process herself. The child was fed what the rest of the family ate. If it was something the infant could handle himself you gave it to him. If not you mashed it up or maybe you chewed it for him into a consistency he could manage.
However, most new parents have become so distanced from their own parents’ childrearing practices geographically, temporally, and philosophically, that they must rely on folks like us and others whom they believe are, or at least claim to be, experts. Young adults are no longer hesitant to cross ethnic thresholds when they decide to be co-parents, meaning that any remnant of family tradition is either diluted or lost outright. In the void created by this abandonment of tradition, corporations were happy to step in with easy-to-prepare baby food that lacks nutritional and dietary variety. Baby-led weaning is just one more logical step in the metamorphosis of our society’s infant feeding patterns.
I still have no problem with baby-led weaning as an option for parents, particularly if with just a click of a mouse they can access safe and healthy advice to make up for generations of grandmotherly experience acquired over hundreds of years.
It is one thing when parents hoping to encourage the process of self-feeding offer their infants an edible that may not be in the family’s usual diet. However, it is a totally different matter when a family allows itself to become dietary contortionists to accommodate a 4-year-old whose diet consists of a monotonous rotation of three pasta shapes topped with grated Parmesan cheese, and on a good day a raw carrot slice or two. Parents living in this nutritional wasteland may have given up on managing their children’s pickiness, and may find it is less stressful to join the child and eat a few forkfuls of pasta to preserve some semblance of a family dinner. Then after the child has been put to bed they have their own balanced meal.
Almost by definition family meals are a compromise. Even adults without children negotiate often unspoken menu patterns with their partners. “This evening we’ll have your favorite, I may have my favorite next week.”
Most parents of young children understand that their diet may be a bit heavier on pasta than they might prefer and a little less varied when it comes to vegetables. It is just part of the deal. However, when mealtimes become totally dictated by the pickiness of a child there is a problem. While a poorly structured child-led family diet may be nutritionally deficient, the bigger problem is that it is expensive in time and labor, two resources usually in short supply in young families.
Theoretically, infants who have led their own weaning are more likely to have been introduced to a broad variety of flavors and textures and this may carry them into childhood as more adventuresome eaters. Picky eating can be managed successfully and result in a family that can enjoy the psychological and emotional benefits of nutritionally balanced family meals, but it requires a combination of parental courage and patience.
It is unclear exactly how we got into a situation in which a generation of parents makes things more difficult for themselves by favoring practices that overemphasize child autonomy. It may be that the parents had suffered under autocratic parents themselves, or more likely they have read too many novels or watched too many movies and TV shows in which the parents were portrayed as overbearing or controlling. Or, it may simply be that they haven’t had enough exposure to young children to realize that they all benefit from clear limits to a varying degree.
In the process of watching tens of thousands of parents, it has become clear to me that those who are the most successful are leaders and that they lead primarily by example. They have learned to be masters in the art of deception by creating a safe environment with sensible limits while at the same time fostering an atmosphere in which the child sees himself as participating in the process.
The biblical prophet Isaiah (11:6-9) in his description of how things will be different after the Lord acts to help his people predicts: “and a little child shall lead them.” This prediction fits nicely as the last in a string of crazy situations that includes a wolf living with a lamb and a leopard lying down with a calf.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
ACG/ASGE Task Force Identifies 19 Indicators for Achieving Quality GI Endoscopy
The task force identified 19 quality indicators, most of which have a performance target > 98%, implying they should be achieved in nearly every case.
The task force’s work was published online in The American Journal of Gastroenterology.
“The purpose of this paper is to delineate all of the steps that the endoscopist should be thinking about before they perform any endoscopy,” task force member Nicholas Shaheen, MD, MPH, Division of Gastroenterology and Hepatology, the University of North Carolina at Chapel Hill, said in an interview.
“Some of these are straightforward — for instance, did we get informed consent? Others are more nuanced — did we appropriately plan for sedation for the procedure, or did we give the right antibiotics before the procedure to prevent an infectious complication after?
“While the vast majority of endoscopists do these measures with every procedure, especially in unusual circumstances or when the procedure is an emergency, they can be overlooked. Having these quality indicators listed in one place should minimize these omissions,” Dr. Shaheen said.
Four Priority Indicators
The update represents the third iteration of the ACG/ASGE quality indicators on GI endoscopic procedures, the most recent of which was published nearly a decade ago.
As in preceding versions, the task force “prioritized indicators that have wide-ranging clinical implications and have been validated in clinical studies.” There are 19 in total, divided into three time periods: Preprocedure (8), intraprocedure (4), and postprocedure (7).
While all 19 indicators are intended to serve as a framework for continual quality improvement efforts among endoscopists and units, the task force recognized a subset of 4 they identified as being a particular priority:
- Frequency with which endoscopy is performed for an indication that is included in a published standard list of appropriate indications and the indication is documented (performance target > 95%)
- Frequency with which prophylactic antibiotics are administered for appropriate indications (performance target > 98%)
- Frequency with which a plan for the management of antithrombotic therapy is formulated and documented before the procedure (performance target = 95%)
- Frequency with which adverse events are documented (performance target > 98%)
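These targets lend themselves to a straightforward self-audit: divide the number of cases meeting each indicator by the total number of cases and compare that rate against the threshold. A minimal sketch of the arithmetic (the indicator keys, counts, and function names are illustrative, not part of the task force document):

```python
# Hypothetical audit sketch: per-indicator compliance vs. the published targets.
# Indicator names and case counts are illustrative only.

TARGETS = {
    "appropriate_indication_documented": 0.95,
    "prophylactic_antibiotics_appropriate": 0.98,
    "antithrombotic_plan_documented": 0.95,
    "adverse_events_documented": 0.98,
}

def compliance_report(counts):
    """counts maps indicator -> (cases_met, total_cases).
    Returns indicator -> (compliance_rate, meets_target)."""
    report = {}
    for indicator, (met, total) in counts.items():
        rate = met / total
        report[indicator] = (round(rate, 3), rate >= TARGETS[indicator])
    return report

example = {
    "appropriate_indication_documented": (97, 100),
    "prophylactic_antibiotics_appropriate": (96, 100),
    "antithrombotic_plan_documented": (95, 100),
    "adverse_events_documented": (99, 100),
}
print(compliance_report(example))
```

In this made-up example the antibiotics indicator (96%) falls short of its 98% target while the other three meet theirs, which is exactly the kind of measured deficiency the task force suggests targeting first.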
Room for Improvement
There remains a lack of compliance with some of these indicators, the task force said.
“Procedures are still performed for questionable indications, adverse events are not always captured and documented, and communication between the endoscopist and patient and/or involved clinicians is sometimes lacking.
“For these reasons, strict attention to the quality indicators in this document and an active plan for improvement in areas of measured deficiency should be a central pillar of the successful practice of endoscopy,” they wrote.
The task force advised that quality improvement efforts initially focus on the four priority indicators and then progress to include other indicators once it is determined that endoscopists are performing above recommended thresholds, either at baseline or after corrective interventions.
Reached for comment, Ashwin N. Ananthakrishnan, MD, MPH, AGAF, a gastroenterologist with Massachusetts General Hospital and Harvard Medical School, both in Boston, Massachusetts, said in an interview that these updated recommendations are “important and commonsense standard procedures that should be followed for all procedures.”
“We recognize endoscopic evaluation plays an important role in the assessment of GI illnesses, but there are also both risks and costs to this as a diagnostic and therapeutic intervention. Thus, it is important to make sure these standards are met, to optimize the outcomes of our patients,” said Dr. Ananthakrishnan, who was not involved in this work.
In a separate statement, the American Gastroenterological Association affirmed that it is committed to supporting gastroenterologists in providing high-quality care via improved patient outcomes, increased efficiency, and cost-effectiveness. AGA encouraged GIs to visit gastro.org/quality to learn more and find quality measures on topics including Barrett’s esophagus, inflammatory bowel disease, acute pancreatitis, and gastric intestinal metaplasia.
This work had no financial support. Dr. Shaheen and Dr. Ananthakrishnan disclosed having no relevant competing interests.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY
Do Cannabis Users Need More Anesthesia During Surgery?
TOPLINE:
Older adults who used cannabis required higher average minimum alveolar concentrations of volatile anesthetics during surgery than nonusers. However, the clinical relevance of this difference remains unclear.
METHODOLOGY:
- To assess if cannabis use leads to higher doses of inhalational anesthesia during surgery, the researchers conducted a retrospective cohort study comparing the average intraoperative minimum alveolar concentrations of volatile anesthetics (isoflurane and sevoflurane) between older adults who used cannabis products and those who did not.
- The researchers reviewed electronic health records of 22,476 patients aged 65 years or older who underwent surgery at the University of Florida Health System between 2018 and 2020.
- Overall, 268 patients who reported using cannabis within 60 days of surgery (median age, 69 years; 35% women) were matched to 1072 nonusers.
- The median duration of anesthesia was 175 minutes.
- The primary outcome was the intraoperative time-weighted average of isoflurane or sevoflurane minimum alveolar concentration equivalents.
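A time-weighted average weights each recorded anesthetic concentration by how long it was maintained, rather than averaging readings equally. A minimal illustration of that arithmetic (hypothetical values and function; the study's actual computation pipeline is not described in this summary):

```python
def time_weighted_average(segments):
    """segments: list of (duration_minutes, mac_equivalent) pairs.
    Returns the MAC average weighted by time spent at each level."""
    total_time = sum(duration for duration, _ in segments)
    return sum(duration * mac for duration, mac in segments) / total_time

# e.g., 60 min at 0.5 MAC followed by 120 min at 0.6 MAC:
avg = time_weighted_average([(60, 0.5), (120, 0.6)])
print(round(avg, 3))  # 0.567 -- closer to 0.6 because more time was spent there
```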
TAKEAWAY:
- Cannabis users had significantly higher average minimum alveolar concentrations of isoflurane or sevoflurane than nonusers (mean, 0.58 vs 0.54; mean difference, 0.04; P = .021).
- The findings were confirmed in a sensitivity analysis that revealed higher mean average minimum alveolar concentrations of anesthesia in cannabis users than in nonusers (0.57 vs 0.53; P = .029).
- Although the 0.04 difference in minimum alveolar concentration between cannabis users and nonusers was statistically significant, its clinical importance is unclear.
IN PRACTICE:
“While recent guidelines underscore the importance of universal screening for cannabinoids before surgery, caution is paramount to prevent clinical bias leading to the administration of unnecessary higher doses of inhalational anesthesia, especially as robust evidence supporting such practices remains lacking,” the authors of the study wrote.
SOURCE:
This study was led by Ruba Sajdeya, MD, PhD, of the Department of Epidemiology at the University of Florida, Gainesville, and was published online in August 2024 in Anesthesiology.
LIMITATIONS:
This study lacked access to prescription or dispensed medications, including opioids, which may have introduced residual confounding. Potential underdocumentation of cannabis use in medical records could have led to exposure misclassification. The causality between cannabis usage and increased anesthetic dosing could not be established due to the observational nature of this study.
DISCLOSURES:
This study was supported by the National Institute on Aging, the National Institutes of Health, and in part by the University of Florida Clinical and Translational Science Institute. Some authors declared receiving research support, consulting fees, and honoraria and having other ties with pharmaceutical companies and various other sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Enhanced Care for Pediatric Patients With Generalized Lichen Planus: Diagnosis and Treatment Tips
Practice Gap
Lichen planus (LP) is an inflammatory cutaneous disorder. Although it often is characterized by the 6 Ps—pruritic, polygonal, planar, purple, papules, and plaques with a predilection for the wrists and ankles—the presentation can vary in morphology and distribution.1-5 With an incidence of approximately 1% in the general population, LP is relatively uncommon.1 Its prevalence in the pediatric population is especially low, with only 2% to 3% of cases manifesting in individuals younger than 20 years.2
Generalized LP (also referred to as eruptive or exanthematous LP) is a rarely reported clinical subtype in which lesions are disseminated or spread rapidly.5 The rarity of generalized LP in children often leads to misdiagnosis or delayed treatment, impacting the patient’s quality of life. Thus, there is a need for heightened awareness among clinicians on the variable presentation of LP in the pediatric population. Incorporating a punch biopsy for the diagnosis of LP when lesions manifest as widespread, erythematous to violaceous, flat-topped papules or plaques, along with the addition of an intramuscular (IM) injection in the treatment plan, improves overall patient outcomes.
Tools and Techniques
A detailed physical examination followed by a punch biopsy was critical for the diagnosis of generalized LP in a 7-year-old Black girl. The examination revealed a widespread distribution of dark, violaceous, polygonal, shiny, flat-topped, firm papules coalescing into plaques across the entire body, with a greater predilection for the legs and overlying joints (Figure, A). Some lesions exhibited fine, silver-white, reticular patterns consistent with Wickham striae. Notably, there was no involvement of the scalp, nails, or mucosal surfaces.
The patient had no relevant medical or family history of skin disease and no recent history of illness. She previously was treated by a pediatrician with triamcinolone cream 0.1%, a course of oral cephalexin, and oral cetirizine 10 mg once daily without relief of symptoms.
Although the clinical presentation was consistent with LP, the differential diagnosis included lichen simplex chronicus, atopic dermatitis, psoriasis, and generalized granuloma annulare. To address the need for early recognition of LP in pediatric patients, a punch biopsy of a lesion on the left anterior thigh was performed and showed lichenoid interface dermatitis—a pivotal finding in distinguishing LP from other conditions in the differential.
Given the patient’s age and severity of the LP, a combination of topical and systemic therapies was prescribed—clobetasol cream 0.025% twice daily and 1 injection of 0.5 cc of IM triamcinolone acetonide 40 mg/mL. This regimen was guided by the efficacy of IM injections in providing prompt symptomatic relief, particularly for patients with extensive disease or for those whose condition is refractory to topical treatments.6 Our patient achieved remarkable improvement at 2-week follow-up (Figure, B), without any observed adverse effects. At that time, the patient’s mother refused further systemic treatment and opted for only the topical therapy as well as natural light therapy.
Practice Implications
Timely and accurate diagnosis of LP in pediatric patients, especially those with skin of color, is crucial. Early intervention is especially important in mitigating the risk for chronic symptoms and preventing potential scarring, which tends to be more pronounced and challenging to treat in individuals with darker skin tones.7 Although not present in our patient, it is important to note that LP can affect the face (including the eyelids) as well as the palms and soles in pediatric patients with skin of color.
The most common approach to management of pediatric LP involves the use of a topical corticosteroid and an oral antihistamine, but the recalcitrant and generalized distribution of lesions warrants the administration of a systemic corticosteroid regardless of the patient’s age.6 In our patient, prompt administration of low-dose IM triamcinolone was both crucial and beneficial. Although an underutilized approach, IM triamcinolone helps to prevent the progression of lesions to the scalp, nails, and mucosa while also reducing inflammation and pruritus in glabrous skin.8
Triamcinolone acetonide injections—administered at concentrations of 5 to 40 mg/mL—directly into the lesion (0.5–1 cc per 2 cm²) are highly effective in managing recalcitrant thickened lesions such as those seen in hypertrophic LP and palmoplantar LP.6 This treatment is particularly beneficial when lesions are unresponsive to topical therapies. Administered every 3 to 6 weeks, these injections provide rapid symptom relief, typically within 72 hours,6 while also contributing to the reduction of lesion size and thickness over time. The concentration of triamcinolone acetonide should be selected based on the lesion’s severity, with higher concentrations reserved for thicker, more resistant lesions. More frequent injections may be warranted in cases in which rapid lesion reduction is necessary, while less frequent sessions may suffice for maintenance therapy. It is important to follow patients closely for adverse effects, such as signs of local skin atrophy or hypopigmentation, and to adjust the dose or frequency accordingly. To mitigate these risks, consider using the lowest effective concentration and rotating injection sites if treating multiple lesions. Additionally, combining intralesional corticosteroids with topical therapies can enhance outcomes, particularly in cases in which monotherapy is insufficient.
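The arithmetic behind the cited regimen is straightforward: the total drug delivered per injection is concentration times volume. The sketch below is illustrative only (the function name and range checks are assumptions for this example, not clinical guidance); it simply encodes the 5–40 mg/mL and 0.5–1 cc per 2 cm² figures from the text.

```python
def intralesional_dose_mg(concentration_mg_per_ml: float, volume_ml: float) -> float:
    """Total triamcinolone acetonide (mg) delivered in one injection.

    Illustrative arithmetic only, not clinical guidance. The cited regimen
    uses concentrations of 5-40 mg/mL and 0.5-1 cc (mL) per 2 cm^2 of lesion.
    """
    if not 5 <= concentration_mg_per_ml <= 40:
        raise ValueError("concentration outside the cited 5-40 mg/mL range")
    if not 0.5 <= volume_ml <= 1.0:
        raise ValueError("volume outside the cited 0.5-1 cc per 2 cm^2 range")
    return concentration_mg_per_ml * volume_ml

# e.g. a 0.5 cc injection at 40 mg/mL delivers 20 mg
```

This makes explicit why higher concentrations are reserved for thicker lesions: at a fixed injection volume, concentration is the only lever on total dose.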
Patients should be monitored vigilantly for complications of LP. The risk for postinflammatory hyperpigmentation is a particular concern for patients with skin of color. Other complications of untreated LP include nail deformities and scarring alopecia.9 Regular and thorough follow-ups every few months to monitor scalp, mucosal, and genital involvement are essential to manage this risk effectively.
Furthermore, patient education is key. Informing patients and their caregivers about the nature of LP, the available treatment options, and the importance of ongoing follow-up can help to enhance treatment adherence and improve overall outcomes.
1. Le Cleach L, Chosidow O. Clinical practice. Lichen planus. N Engl J Med. 2012;366:723-732. doi:10.1056/NEJMcp1103641
2. Handa S, Sahoo B. Childhood lichen planus: a study of 87 cases. Int J Dermatol. 2002;41:423-427. doi:10.1046/j.1365-4362.2002.01522.x
3. George J, Murray T, Bain M. Generalized, eruptive lichen planus in a pediatric patient. Contemp Pediatr. 2022;39:32-34.
4. Arnold DL, Krishnamurthy K. Lichen planus. StatPearls [Internet]. Updated June 1, 2023. Accessed August 12, 2024. https://www.ncbi.nlm.nih.gov/books/NBK526126/
5. Weston G, Payette M. Update on lichen planus and its clinical variants. Int J Womens Dermatol. 2015;1:140-149. doi:10.1016/j.ijwd.2015.04.001
6. Mutalik SD, Belgaumkar VA, Rasal YD. Current perspectives in the treatment of childhood lichen planus. Indian J Paediatr Dermatol. 2021;22:316-325. doi:10.4103/ijpd.ijpd_165_20
7. Usatine RP, Tinitigan M. Diagnosis and treatment of lichen planus. Am Fam Physician. 2011;84:53-60.
8. Thomas LW, Elsensohn A, Bergheim T, et al. Intramuscular steroids in the treatment of dermatologic disease: a systematic review. J Drugs Dermatol. 2018;17:323-329.
9. Gorouhi F, Davari P, Fazel N. Cutaneous and mucosal lichen planus: a comprehensive review of clinical subtypes, risk factors, diagnosis, and prognosis. ScientificWorldJournal. 2014;2014:742826. doi:10.1155/2014/742826
Nonhormonal Treatment May Ease Menopausal Symptoms
Elinzanetant, a selective antagonist of neurokinin 1 and 3 receptors, led to rapid improvement in the frequency of vasomotor symptoms and significant improvements in the severity of symptoms, sleep disturbances, and menopause-related quality of life in two phase 3 studies. Researchers led by JoAnn V. Pinkerton, MD, of University of Virginia Health in Charlottesville, reported the findings of the randomized OASIS 1 and 2 studies in JAMA.
“Women experience a variety of symptoms during their menopausal transition, including vasomotor symptoms ... and sleep disturbances, reported by up to 80% and 60%, respectively,” wrote the researchers. “Menopausal symptoms can negatively impact quality of life, reducing the capacity for daily activities and work productivity, and may be associated with long-term negative health outcomes such as cardiovascular events, depressive symptoms, cognitive decline, and other adverse brain outcomes.” The researchers also noted that some therapeutic options are available, including hormone replacement therapy and, in some countries, paroxetine, a selective serotonin reuptake inhibitor.
The Italian Ministry of Health’s menopause website notes that the transition generally occurs between ages 45 and 55 years. This major hormonal change has consequences for women’s health. According to ministry experts, diet and hormone replacement therapy (taken under medical supervision) can prevent or counteract these consequences.
“Many women have contraindications, have tolerability issues leading to discontinuation, or prefer not to take these treatments,” wrote Dr. Pinkerton and colleagues, who evaluated the efficacy and tolerability of elinzanetant, a nonhormonal alternative treatment in development. The two double-blind, randomized, phase 3 studies (OASIS 1 and 2) included postmenopausal participants between ages 40 and 65 years with moderate to severe vasomotor symptoms who were treated with elinzanetant (OASIS 1, n = 199; OASIS 2, n = 200) or placebo (OASIS 1, n = 197; OASIS 2, n = 200).
After 4 weeks of treatment, 62.8% of participants in the OASIS 1 study and 62.2% in the OASIS 2 study reported at least a 50% reduction in the frequency of vasomotor symptoms (29.2% and 32.3% in the respective placebo groups). Improvements increased by week 12, with 71.4% and 74.7% of women in the elinzanetant group achieving this reduction (42.0% and 48.3% in the respective placebo groups). Women who took the medication also reported a reduction in the severity of vasomotor symptoms and improvements in sleep and menopause-related quality of life, with no significant tolerability and safety issues. “Elinzanetant has the potential to provide a well-tolerated and efficacious nonhormonal treatment option to address the unmet health needs of many menopausal individuals with moderate to severe vasomotor symptoms,” the authors concluded.
“With the discovery of nonhormonal treatment options targeting the neurons responsible for vasomotor symptoms, menopause care should advance on this solid scientific footing to benefit affected individuals,” wrote Stephanie S. Faubion, MD, and Chrisandra L. Shufelt, MD, who are affiliated with the Mayo Clinic in Rochester, Minnesota, and Jacksonville, Florida, in an accompanying editorial.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
FROM JAMA
Monitor Asthma Patients on Biologics for Remission, Potential EGPA Symptoms During Steroid Tapering
VIENNA — Patients with severe asthma receiving biologics should be monitored for signs of clinical remission and, during steroid tapering, for emerging symptoms of eosinophilic granulomatosis with polyangiitis (EGPA), according to pulmonary experts presenting at the European Respiratory Society (ERS) 2024 International Congress.
Biologics have revolutionized the treatment of severe asthma, significantly improving patient outcomes. However, the focus has recently shifted toward achieving more comprehensive disease control. Remission, already a well-established goal in conditions like rheumatoid arthritis and inflammatory bowel disease, is now being explored in patients with asthma receiving biologics.
Peter Howarth, medical director at Global Medical, Specialty Medicine, GSK, in Brentford, England, said that the new clinical remission criteria for asthma may be overly rigid and therefore of little practical use. More attainable thresholds are needed, he said, and in the meantime clinicians should document clinical data more thoroughly.
In parallel, studies have also raised questions about the role of biologics in the emergence of EGPA.
Defining Clinical Remission in Asthma
Last year, a working group, including members from the American Thoracic Society and the American College and Academy of Allergy, Asthma, and Immunology, proposed new guidelines to define clinical remission in asthma. These guidelines extended beyond the typical outcomes of no severe exacerbations, no maintenance oral corticosteroid use, good asthma control, and stable lung function. The additional recommendations included no missed work or school due to asthma, limited use of rescue medication (no more than once a month), and reduced inhaled corticosteroid use to low or medium doses.
To explore the feasibility of achieving these clinical remission outcomes, GSK partnered with the Mayo Clinic for a retrospective analysis of the medical records of 700 patients with asthma undergoing various biologic therapies. The study revealed that essential data for determining clinical remission, such as asthma control and exacerbation records, were inconsistently documented. While some data were recorded, such as maintenance corticosteroid use in 50%-60% of cases, other key measures, like asthma control, were recorded in less than a quarter of the patients.
GSK researchers analyzed available data and found that around 30% of patients on any biologic therapy met three components of remission. Mepolizumab performed better than the other biologics, with over 40% of those receiving the drug meeting these criteria. However, when stricter definitions were applied, such as requiring four or more remission components, fewer patients achieved remission — less than 10% for four components, with no patients meeting the full seven-point criteria proposed by the working group.
An ongoing ERS Task Force is now exploring what clinical remission outcomes are practical to achieve, as the current definitions may be too aspirational, said Mr. Howarth. “It’s a matter of defining what is practical to achieve because if you can’t achieve it, then it won’t be valuable.”
He also pointed out that biologics are often used for the most severe cases of asthma after other treatments have failed. Evidence suggests that introducing biologics earlier in the disease, before chronic damage occurs, may result in better patient outcomes.
Biologics and EGPA
In a retrospective study, clinical details of 27 patients with adult-onset asthma, all on biologic therapy, were analyzed. The study, a multicountry collaboration spanning 28 countries, was led by the ERS Severe Heterogeneous Asthma Research Collaboration, Patient-centred (SHARP), and aimed to understand the role of biologics in the emergence of EGPA.
The most significant finding presented at the ERS 2024 International Congress was that EGPA was not associated with maintenance corticosteroids; instead, it often emerged when corticosteroid doses were reduced or tapered off. “This might suggest that steroid withdrawal may unmask the underlying disease,” said Hitasha Rupani, MD, a consultant respiratory physician at the University Hospital Southampton, in Southampton, England. Importantly, the rate at which steroids were tapered did not influence the onset of EGPA, indicating that the tapering process, rather than its speed, may be the critical factor. However, due to the small sample size, this remains a hypothesis, Dr. Rupani explained.
The study also found that when clinicians had a clinical suspicion of EGPA before starting biologic therapy, the diagnosis was made earlier than in cases without such suspicion. Dr. Rupani concluded that this underscores the importance of clinical vigilance and the need to monitor patients closely for EGPA symptoms, especially during corticosteroid tapering.
The study was funded by GSK. Mr. Howarth is an employee at GSK. Dr. Rupani reports no relevant financial relationships.
A version of this article appeared on Medscape.com.
Night Owls May Be at Greater Risk for T2D, Beyond Lifestyle
MADRID — Night owls, or people with late chronotypes, may be at greater risk for type 2 diabetes (T2D) than early risers, beyond the influence of lifestyle factors, research presented at the annual meeting of the European Association for the Study of Diabetes suggested.
In the study, night owls were almost 50% more likely to develop T2D than those who went to sleep earlier.
“The magnitude of this risk was more than I expected, [although] residual confounding may have occurred,” said Jeroen van der Velde, PhD, Leiden University Medical Center in the Netherlands, who presented the study.
“Late chronotype has previously been associated with unhealthy lifestyle and overweight or obesity and, subsequently, cardiometabolic diseases,” he said in an interview. However, although the current study found that individuals with late chronotypes did indeed have larger waists and more visceral fat, “we (and others) believe that lifestyle cannot fully explain the relation between late chronotype and metabolic disorders.”
“In addition,” he noted, “previous studies that observed that late chronotype is associated with overweight or obesity mainly focused on body mass index [BMI]. However, BMI alone does not provide accurate information regarding fat distribution in the body. People with similar BMI may have different underlying fat distribution, and this may be more relevant than BMI for metabolic risk.”
The researchers examined associations between chronotype and BMI, waist circumference, visceral fat, liver fat, and the risk for T2D in a middle-aged population from the Netherlands Epidemiology of Obesity study. Among the 5026 participants, the mean age was 56 years, 54% were women, and mean BMI was 30.
Using data from the study, the study investigators calculated the midpoint of sleep (MPS) and divided participants into three chronotypes: early, MPS < 2:30 AM (20% of participants); intermediate, MPS 2:30-4:00 AM (reference category; 60% of participants); and late, MPS ≥ 4:00 AM (20% of participants). BMI and waist circumference were measured in all participants, and visceral fat and liver fat were measured in 1576 participants using MRI scans and MR spectroscopy, respectively.
During a median follow-up of 6.6 years, 225 participants were diagnosed with T2D. After adjustment for age, sex, education, physical activity, smoking, alcohol intake, diet quality, sleep quality and duration, and total body fat, participants with a late chronotype had a 46% increased risk for T2D.
Further, those with a late chronotype had a 0.7-unit higher BMI, a 1.9-cm larger waist circumference, 7 cm² more visceral fat, and 14% more liver fat.
Body Clock Out of Sync?
“Late chronotype was associated with increased ectopic body fat and with an increased risk of T2D independent of lifestyle factors and is an emerging risk factor for metabolic diseases,” the researchers concluded.
“A likely explanation is that the circadian rhythm or body clock in late chronotypes is out of sync with the work and social schedules followed by society,” Dr. van der Velde suggested. “This can lead to circadian misalignment, which we know can lead to metabolic disturbances and ultimately type 2 diabetes.”
Might trying to adjust chronotype earlier in life have an effect on risk?
“Chronotype, as measured via midpoint of sleep, does change a lot in the first 30 years or so in life,” he said. “After that it seems to stabilize. I suppose that if you adopt an intermediate or early chronotype around the age of 30 years, this will help to maintain an earlier chronotype later in life, although we cannot answer this from our study.”
Nevertheless, with respect to T2D risk, “chronotype is likely only part of the puzzle,” he noted.
“People with late chronotypes typically eat late in the evening, and this has also been associated with adverse metabolic effects. At this stage, we do not know if, when a person changes his/her chronotype, this will also lead to metabolic improvements. More research is needed before we can make recommendations regarding chronotype and timing of other lifestyle behaviors.”
Commenting on the study, Gianluca Iacobellis, MD, PhD, director of the University of Miami Hospital Diabetes Service, Coral Gables, Florida, said: “Interesting data. Altering the physiological circadian rhythm can affect the complex hormonal system — including cortisol, ghrelin, leptin, and serotonin — that regulates insulin sensitivity, glucose, and blood pressure control. The night owl may become more insulin resistant and therefore at higher risk of developing diabetes.”
Like Dr. van der Velde, he noted that “late sleep may be associated with night binging that can cause weight gain and ultimately obesity, further increasing the risk of diabetes.”
Dr. Iacobellis’s group recently showed that vital exhaustion, which is characterized by fatigue and loss of vigor, is associated with a higher cardiovascular risk and markers of visceral adiposity.
“Abnormal circadian rhythms can be easily associated with vital exhaustion,” he said. Therefore, night owls with more visceral than peripheral fat accumulation might also be at higher cardiometabolic risk through that mechanism.
“However environmental factors and family history can play an important role too,” he added.
Regardless of the mechanisms involved, “preventive actions should be taken to educate teenagers and individuals at higher risk to have healthy sleep habits,” Dr. Iacobellis concluded.
No information regarding funding was provided; Dr. van der Velde and Dr. Iacobellis reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM EASD 2024
Debate: Should Patients With CLL Take Breaks From Targeted Therapies?
At the annual meeting of the Society of Hematologic Oncology, two hematologist-oncologists — Inhye Ahn, MD, of Dana-Farber Cancer Institute in Boston, Massachusetts, and Kerry A. Rogers, MD, of Ohio State University in Columbus — faced off in a debate. Dr. Ahn said the drugs can indeed be discontinued, while Dr. Rogers argued against stopping the medications.
“When I talk to my own patient about standard of care options in CLL, I use the analogy of a marathon and a sprint,” Dr. Ahn said. A marathon refers to continuous treatment with Bruton’s kinase inhibitors given daily for years, while the sprint refers to the combination of venetoclax with an anti-CD20 monoclonal antibody given over 12 cycles for the frontline regimen and 2 years for refractory CLL.
“I tell them these are both considered very efficacious regimens and well tolerated, one is by IV [the venetoclax regimen] and the other isn’t [Bruton’s kinase inhibitors], and they have different toxicity profile. I ask them what would you do? The most common question that I get from my patient is, ‘why would anyone do a marathon?’ ”
It’s not solely the length of treatment that’s important, Dr. Ahn said, as toxicities from the long-term use of Bruton’s kinase inhibitors build up over time and can lead to hypertension, arrhythmia, and sudden cardiac death.
In addition, she said, infections can occur, as well as hampered vaccine response, an important risk in the era of the COVID-19 pandemic. The cost of the drugs is high and adds up over time, and continuous use can boost resistance.
Is there a way to turn the marathon of Bruton’s kinase inhibitor use into a sprint without hurting patients? The answer is yes, through temporary discontinuation, Dr. Ahn said, although she cautioned that early cessation could lead to disease flare. “We dipped into our own database of 84 CLL patients treated with ibrutinib, and our conclusion was that temporary dose interruption or dose reduction did not impact progression-free survival.”
Moving forward, she said, “more research is needed to define the optimal regimen that would lead to treatment cessation, the optimal patient population, who would benefit most from the cessation strategy, treatment duration, and how we define success.” For her part, Dr. Rogers argued that the continuous use of Bruton’s kinase inhibitors is “really the most effective treatment we have in CLL.”
It’s clear that “responses deepen with continued treatment,” Dr. Rogers said, noting that remission times grow over years of treatment. She highlighted a 2022 study of patients with CLL who took ibrutinib that found complete remission or complete remission with incomplete hematologic recovery was 7% at 12 months and 34% at 7 years. When patients quit taking the drugs, “you don’t get to maximize your patient’s response to this treatment.”
Dr. Rogers also noted that the RESONATE-2 trial found that ibrutinib is linked to the longest median progression-free survival of any CLL treatment at 8.9 years. “That really struck me a very effective initial therapy.”
Indeed, “when you’re offering someone initial therapy with a Bruton’s kinase inhibitor as a continuous treatment strategy, you can tell people that they can expect a normal lifespan with this approach. That’s extremely important when you’re talking to patients about whether they might want to alter their leukemia treatment.”
Finally, she noted that discontinuation of ibrutinib was linked to shorter survival in early research. “This was the first suggestion that discontinuation is not good.”
Dr. Rogers said that discontinuing the drugs is sometimes necessary because of adverse events, but patients can “certainly switch to a more tolerable Bruton’s kinase inhibitor. With the options available today, that should be a strategy that’s considered.”
Audience members at the debate were invited to respond to a live online survey about whether Bruton’s kinase inhibitors can be discontinued. Among 49 respondents, most (52.3%) said no, 42.8% said yes, and the rest were undecided/other.
Disclosures for the speakers were not provided. Dr. Ahn disclosed consulting for BeiGene and AstraZeneca. Dr. Rogers disclosed receiving research funding from Genentech, AbbVie, Janssen, and Novartis; consulting for AstraZeneca, BeiGene, Janssen, Pharmacyclics, AbbVie, Genentech, and LOXO@Lilly; and receiving travel funding from AstraZeneca.
A version of this article appeared on Medscape.com.
At the annual meeting of the Society of Hematologic Oncology, two hematologist-oncologists — Inhye Ahn, MD, of Dana-Farber Cancer Institute in Boston, Massachusetts, and Kerry A. Rogers, MD, of Ohio State University in Columbus — faced off in a debate over whether Bruton’s kinase inhibitors can be discontinued in patients with chronic lymphocytic leukemia (CLL). Dr. Ahn argued that the drugs can indeed be discontinued, while Dr. Rogers argued against stopping them.
“When I talk to my own patients about standard of care options in CLL, I use the analogy of a marathon and a sprint,” Dr. Ahn said. The marathon refers to continuous treatment with Bruton’s kinase inhibitors given daily for years, while the sprint refers to the combination of venetoclax with an anti-CD20 monoclonal antibody, given over 12 cycles as a frontline regimen and over 2 years for refractory CLL.
“I tell them these are both considered very efficacious and well-tolerated regimens, one is by IV [the venetoclax regimen] and the other isn’t [Bruton’s kinase inhibitors], and they have different toxicity profiles. I ask them, ‘What would you do?’ The most common question that I get from my patients is, ‘Why would anyone do a marathon?’ ”
It’s not solely the length of treatment that’s important, Dr. Ahn said, as toxicities from the long-term use of Bruton’s kinase inhibitors build up over time and can lead to hypertension, arrhythmia, and sudden cardiac death.
In addition, she said, infections can occur, as can a blunted vaccine response, an important risk in the era of the COVID-19 pandemic. The cost of the drugs is high and adds up over time, and continuous use can promote drug resistance.
Is there a way to turn the marathon of Bruton’s kinase inhibitor use into a sprint without hurting patients? The answer is yes, through temporary discontinuation, Dr. Ahn said, although she cautioned that early cessation could lead to disease flare. “We dipped into our own database of 84 CLL patients treated with ibrutinib, and our conclusion was that temporary dose interruption or dose reduction did not impact progression-free survival.”
Moving forward, she said, “more research is needed to define the optimal regimen that would lead to treatment cessation, the optimal patient population, who would benefit most from the cessation strategy, treatment duration, and how we define success.”

For her part, Dr. Rogers argued that the continuous use of Bruton’s kinase inhibitors is “really the most effective treatment we have in CLL.”
It’s clear that “responses deepen with continued treatment,” Dr. Rogers said, noting that remission times grow over years of treatment. She highlighted a 2022 study of patients with CLL treated with ibrutinib, which found that the rate of complete remission, or complete remission with incomplete hematologic recovery, was 7% at 12 months and 34% at 7 years. When patients quit taking the drugs, “you don’t get to maximize your patient’s response to this treatment.”
Dr. Rogers also noted that the RESONATE-2 trial found that ibrutinib is linked to the longest median progression-free survival of any CLL treatment, at 8.9 years. “That really struck me as a very effective initial therapy.”
Indeed, “when you’re offering someone initial therapy with a Bruton’s kinase inhibitor as a continuous treatment strategy, you can tell people that they can expect a normal lifespan with this approach. That’s extremely important when you’re talking to patients about whether they might want to alter their leukemia treatment.”
Finally, she noted that discontinuation of ibrutinib was linked to shorter survival in early research. “This was the first suggestion that discontinuation is not good.”
Dr. Rogers said that discontinuing the drugs is sometimes necessary because of adverse events, but patients can “certainly switch to a more tolerable Bruton’s kinase inhibitor. With the options available today, that should be a strategy that’s considered.”
Audience members at the debate were invited to respond to a live online survey about whether Bruton’s kinase inhibitors can be discontinued. Among 49 respondents, most (52.3%) said no, 42.8% said yes, and the rest were undecided/other.
Dr. Ahn disclosed consulting for BeiGene and AstraZeneca. Dr. Rogers disclosed receiving research funding from Genentech, AbbVie, Janssen, and Novartis; consulting for AstraZeneca, BeiGene, Janssen, Pharmacyclics, AbbVie, Genentech, and LOXO@Lilly; and receiving travel funding from AstraZeneca.
A version of this article appeared on Medscape.com.
FROM SOHO 2024