Lessons from the longest study on happiness
The Harvard Study of Adult Development may be the most comprehensive longitudinal study ever conducted, having followed its participants for their entire adult lives. Started in Boston in 1938, the study has already covered three generations: grandparents, parents, and children, who are now “baby boomers.” It has followed more than 2,000 people over 85 years.
In January, Robert J. Waldinger, MD, the current director of this incredible study, published the book The Good Life: Lessons From the World’s Longest Scientific Study of Happiness, coauthored with the study’s associate director, Marc Schulz, PhD.
By following this large population for more than 8 decades, the study uncovered the factors most correlated with well-being and happiness. Here, I have summarized some of the authors’ main concepts.
Most important factors
The study’s happiest participants had two major factors in common throughout its 85 years: taking care of their health and building loving relationships with others.
It seems obvious that being in good health is essential to living well. Somewhat surprisingly, however, the researchers found that good relationships were the strongest predictor of health and happiness in aging. Other authors have confirmed this finding, and research has sought to identify the physiological mechanisms behind the benefit.
Professional success insufficient
Professional success on its own does not guarantee happiness, however gratifying it may be. The study revealed that the happiest people were not isolated; on the contrary, they valued and fostered relationships. Levels of education and cultural awareness, which tend to be higher among those with higher salaries, were also important factors in adopting healthy habits (promoted more widely from the 1960s onward) and in gaining better access to health care.
Social skills
Loneliness is increasingly common and makes stressful situations harder to cope with. It is essential to have someone with whom we can vent. Dr. Waldinger therefore recommends assessing how to foster, strengthen, and broaden our relationships. He calls this maintaining social connections, and, just like physical fitness, it requires constant practice. Friendships and relationships need regular attention to keep them from fizzling out; a simple telephone call can help. Participating in activities that bring joy and encourage camaraderie, such as sports, hobbies, and volunteer work, can broaden one’s network of relationships.
Happiness not constant
Social media almost always shows the positive side of people’s lives and suggests that everyone lives worry-free. However, the truth is that no one’s life is free of difficulties and challenges. Social skills contribute to resilience.
It is never too late for a turnaround, and people can change their lives through new relationships and experiences. Those who think they already know everything about life are very much mistaken. The study showed that good things happened even to participants who had given up on changing their situation, often when they least expected it.
This study highlights the importance of having social skills and always cultivating our relationships to help us become healthier, overcome challenging moments, and achieve the happiness that we all desire.
We finally have robust, evidence-based data to draw on when speaking about happiness.
Dr. Wajngarten is professor of cardiology, University of São Paulo, Brazil. He has disclosed no relevant financial relationships.
This article was translated from the Medscape Portuguese Edition. A version of this article appeared on Medscape.com.
Affordable IVF – Are we there yet?
The price of an in vitro fertilization (IVF) cycle continues to rise annually at many clinics, driven in particular by “add-ons” of dubious value.
The initial application of IVF was for tubal factor infertility. In the decades since 1981, the year of the first successful IVF live birth in the United States, indications have expanded dramatically: ovulation dysfunction, unexplained infertility, male factor, advanced-stage endometriosis, embryo testing to avoid transmitting an inherited genetic disease when both intended parents carry the same mutation, and family balancing for gender, along with fertility preservation, including before potentially gonadotoxic treatment and “elective” planned oocyte cryopreservation.
From RESOLVE.org, the National Infertility Association: “As of June 2022, 20 states have passed fertility insurance coverage laws, 14 of those laws include IVF coverage, and 12 states have fertility preservation laws for iatrogenic (medically induced) infertility.” Consequently, “affordable IVF” is paramount to providing equal access for patients.
I spoke with the past president of The Society for Assisted Reproductive Technology (SART.org), Kevin Doody, MD, HCLD, to discuss current IVF treatment options that may decrease couples’ financial burden, particularly a novel approach – called INVOcell – that uses the woman’s vagina as the embryo “incubator.” Dr. Doody is director of CARE Fertility in Bedford, Tex., and clinical professor at UT Southwestern Medical Center, Dallas.
How does limiting the dosage of gonadotropins in IVF cycles, known as “minimal stimulation,” affect pregnancy outcomes?
IVF medications are often costly, so it is logical to try to minimize expenses by using them judiciously. “Minimal stimulation,” however, is generally not the best approach, as having more eggs usually leads to better pregnancy rates. A high egg yield increases short-term success and provides additional embryos for future attempts.
However, extremely high gonadotropin doses do not necessarily yield more eggs or more successful pregnancies. The dose response to gonadotropins follows a sigmoid curve, and doses beyond 225-300 IU per day typically offer no additional benefit, except in women with elevated body weight. Yet some physicians continue to use higher doses in women with low ovarian reserve, which is often not beneficial and adds unnecessary cost.
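To see why the curve flattens, consider a standard sigmoidal Emax (Hill) dose-response model, offered here purely as a generic illustration rather than a model fitted to any data from the interview:

$$E(D) = \frac{E_{\max}\, D^{h}}{ED_{50}^{h} + D^{h}}$$

Here $E(D)$ is the expected oocyte yield at daily gonadotropin dose $D$, $E_{\max}$ is the maximal attainable yield, $ED_{50}$ is the dose producing a half-maximal response, and $h$ is the Hill coefficient governing the steepness of the curve. Once $D$ is well above $ED_{50}$, $E(D)$ approaches $E_{\max}$ asymptotically, which is why pushing doses past roughly 225-300 IU per day adds little.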
Is “natural cycle” IVF cost-effective with acceptable pregnancy success rates?
Although the first-ever IVF baby was conceived through a natural cycle, this approach has very low success rates. Even with advancements in IVF laboratory technologies, the outcomes of natural cycle IVF have remained disappointingly low and are generally considered unacceptable.
Are there other cost-saving alternatives for IVF that still maintain reasonable success rates?
Some patients can undergo a more simplified ovarian stimulation protocol that reduces the number of monitoring visits, thus reducing costs. In couples without a severe male factor, the application and additional expense of intracytoplasmic sperm injection (ICSI) are unnecessary. Preimplantation genetic testing for embryo aneuploidy, another “add-on” procedure, has specific indications, and medical evidence does not support its use in all patient cycles.
How can the cost of a standard IVF cycle be reduced, especially in areas without mandated infertility insurance coverage?
Addressing this issue involves considering principles of justice in medical ethics, which emphasize equal health care access for all individuals. Infertility is a medical condition and IVF is expensive, so lack of insurance coverage often restricts access. Our clinic offers a more affordable option called “effortless IVF” using an intravaginal culture system (INVOcell), which minimizes the monitoring process while maintaining satisfactory success rates and reducing the risks associated with ovarian hyperstimulation syndrome.
What is INVOcell, and how successful is it in terms of live birth rates?
INVOcell is an innovative approach to IVF, where an intravaginal culture system is used as an “embryo incubator whereby freshly harvested eggs along with sperm are immediately added to a small chamber device that is placed in the woman’s vagina for up to 5 days to allow for fertilization and embryo development.” The woman typically has no discomfort from the device. For appropriately selected patients, the literature has shown live birth rates comparable to those achieved with conventional laboratory incubation systems.
As an early participant in INVOcell research, can you share insights on the ideal candidates for this procedure and any contraindications?
The INVOcell system is best suited for straightforward cases. It is not recommended for severe male factor infertility requiring ICSI, since this will delay application of the chamber device and increase cost. Further, cases involving preimplantation genetic testing are not recommended because the embryos may not develop synchronously within the device to the embryo stage needed for a biopsy.
What training is required for embryologists and physicians to use INVOcell?
Embryologists need only a few hours of training to learn the basics of INVOcell. They must master loading eggs into, and retrieving embryos from, the device; practicing on discarded eggs and embryos accelerates acquisition of the proper technique. Physicians find the training easier; they mainly need to learn correct placement and removal of the device in the vagina.
Is INVOcell gaining acceptance among patients and IVF centers?
Acceptance varies. In our practice, INVOcell has largely replaced superovulation and intrauterine insemination treatments. However, some clinics still need to determine how this tool fits within their practice.
Have IVF success rates plateaued as affordable options increase?
IVF success rates grew substantially in the 1980s and 1990s, fostered by improved embryo culture systems and the transfer of higher numbers of embryos, the latter at the expense of multiple gestations. While the rate of improvement has slowed, coinciding with the increasing use of single embryo transfer, advancements in IVF continue toward the goal of improving the singleton live birth rate per cycle. There is still room to enhance success rates while reducing costs. Continued innovation is needed, especially for patients with challenging underlying biological issues.
Can you provide insight into the next potential breakthrough in IVF that may reduce costs, be less invasive, and maintain optimal pregnancy rates?
I am very excited about recent breakthroughs in in vitro maturation (IVM) of oocytes. The bottleneck in IVF clinics (and significant expense) primarily relates to the need to stimulate the ovaries to get mature and competent eggs. The technology of IVM has existed for decades but has yet to be fully embraced by clinics because of the poor competency of oocytes matured in the laboratory.
Immature eggs resume meiosis immediately upon removal from the ovary. Nuclear maturation of eggs in the lab is easy. In fact, it happens too quickly, thereby not allowing for the maturation of the egg cytoplasm. This has previously led to poor development of embryos following fertilization and low success rates.
Recently, a new laboratory strategy has significantly improved success. This culture system uses a peptide that blocks the resumption of meiosis during the initial culture period. Substances, including follicle-stimulating hormone, can be added to the medium to promote cytoplasmic maturation of the oocyte. The eggs are then moved to a medium without the meiosis inhibitor to allow nuclear maturation. The result is a significantly higher proportion of competent mature eggs.
Dr. Trolice is director of The IVF Center in Winter Park, Fla., and professor of obstetrics and gynecology at the University of Central Florida, Orlando.
Does rapid weight loss from GLP-1s decrease muscle mass?
Recently, the glucagon-like peptide-1 (GLP-1) receptor agonist semaglutide has changed the obesity treatment landscape. This and similar medications approaching the market are in high demand because of their ease of use, effectiveness, and lack of interactions with other medications.
Semaglutide is a weekly subcutaneous injection approved by the U.S. Food and Drug Administration for weight loss in conjunction with lifestyle change. It elicits an average weight loss of 15%-18% from baseline over 52-68 weeks in adults with overweight or obesity (body mass index ≥ 27 with at least one obesity-related comorbidity, or BMI ≥ 30) (Wilding et al; Rubino et al). Liraglutide, a daily GLP-1 agonist, is also FDA approved for the treatment of overweight and elicits an average weight loss of 8% from treatment start.
Though GLP-1 agonists are very effective for weight loss, questions about side effects have arisen.
Current weight-loss modalities don’t specifically target fat mass (FM), so it is expected that some fat-free mass (FFM), including muscle mass, will be lost along with FM.
Loss of muscle mass is associated with an increased risk for lower bone density, fatigue, injuries, and decreased strength. In addition, sarcopenic obesity, the combination of a high body fat percentage and low skeletal muscle mass, is of particular concern in patients older than 65 years and in postmenopausal patients. Because GLP-1 agonists cause more rapid and sustained weight loss than intensive behavioral lifestyle therapy, possible muscle mass loss with GLP-1–agonist use has recently received more media attention.
However, a proper, well-rounded approach to obesity treatment can mitigate muscle mass loss even when rapid weight loss occurs. Weight loss achieved through very-low-calorie dietary changes alone (without exercise) is also associated with significant reductions in lean muscle mass; incorporating exercise, preferably resistance training, can mitigate this loss. The muscle-preserving effect of exercise is especially prominent in older populations, where it is needed most, and should be incorporated (Armamento-Villareal et al.; Winter et al.; Batsis and Zagaria; Mason et al.).
Furthermore, studies in rat models demonstrate that liraglutide induces myogenesis in myoblasts and protects against muscular atrophy. In human studies, GLP-1 infusion was associated with improved skeletal and cardiac muscle microvasculature, suggesting that GLP-1 agonists may have some positive effects on muscle. A 2020 systematic review comparing gradual versus rapid weight loss found no significant difference in muscle loss between the two groups. Even after gastric bypass surgery, most muscle mass loss occurred during the first year, while weight was actively being lost; after the first year, skeletal muscle was maintained even without additional dietary or exercise interventions.
Age, although a consideration, should not be a discriminating factor against treating obesity. Sarcopenic obesity is a serious risk especially in patients aged 65 years or older, but GLP-1–agonist therapy can be beneficial to prevent muscle atrophy and increase blood flow to skeletal and cardiac muscle. In addition, patients must be encouraged to maintain an appropriate dietary and exercise regimen to treat their obesity. Management of obesity is complex and multifaceted, and patients should understand their responsibility to follow clinician recommendations during this journey to decrease the associated side effects.
Overall, some muscle mass loss is expected with any degree of weight loss achieved by current strategies. Active efforts to preserve muscle mass can keep this loss from becoming excessive.
Therefore, providers prescribing medications like GLP-1 agonists to treat obesity must also counsel patients about incorporating aerobic exercise and resistance training as part of the treatment plan as well as ensuring they eat a high-protein diet. Generally, resistance training is preferred over aerobic exercise for muscle mass preservation and increased strength, but studies also demonstrate benefit with aerobic exercise.
In the first few visits after initiating obesity treatment, patients should be encouraged to incorporate light physical activity as tolerated while making dietary changes to include at least 0.8 g/kg of body weight per day of protein – for example, at least 56 g/day for a 70-kg patient (Fappi et al.). These initial visits are also an important opportunity for clinicians to instill the importance of exercise as part of healthy weight loss. Physical activity level should be assessed at every visit.
Dr. Ahn is a clinical fellow in obesity medicine, Weight Management Center, at Massachusetts General Hospital, Boston. Dr. Singhal is an assistant professor of pediatrics, Harvard Medical School, Boston, and director, Pediatric Program, MGH Weight Center, Massachusetts General Hospital. Dr. Singhal reported that his spouse consults with AstraZeneca, Daiichi Pharma, Eli Lilly, Genentech, Immunomedics, Pfizer, Sanofi, and Novartis.
A version of this article first appeared on Medscape.com.
Brisk walking: No-cost option for patients to improve cancer outcomes
This transcript has been edited for clarity.
I’m Maurie Markman, MD, from Cancer Treatment Centers of America in Philadelphia. I wanted to discuss a highly provocative paper that I think deserves attention. It was published in the Journal of Clinical Oncology, titled “Physical Activity in Stage III Colon Cancer: CALGB/SWOG 80702 (Alliance).”
This is an incredibly important paper that highlights something that has not been emphasized enough in oncology practice. What are the things that we can recommend to our patients that are not expensive, but which they can do for themselves to impact a potential for adding to a positive outcome? In this case, we’re talking about physical activity.
This was an extremely well-conducted study. It was a prospective cohort study that was built into an ongoing phase 3 randomized, multicenter study looking at adjuvant therapy of stage III colon cancer. The median follow-up in this population was almost 6 years. We’re talking about 1,696 patients.
The investigators surveyed patients when they started treatment and again a short time later, measuring their level of recreational physical activity. Activity was not assigned by design; the investigators simply asked individuals how much activity they got.
The paper reports a number of analyses; I want to highlight one because it’s so simple. The investigators looked at brisk walking. The 3-year disease-free survival was 81.7% for individuals who did less than 1 hour per week of brisk walking versus 88.4% for those who walked briskly more than 3 hours per week, an absolute difference of 6.7 percentage points.
There is no additional expense. It’s walking. There were other activities that were looked at here, including aerobic activities.
The bottom line is that physical activity is positive, is not expensive, and focuses on what the individual patient can do for themselves. It’s something I believe that, in the oncology community, we need to emphasize more.
I encourage you to review this paper and form your own opinion about what to do with this information, but I strongly urge you to consider this and other activities that we can recommend individuals do for themselves to improve their cancer-related outcomes.
Dr. Markman is a clinical professor of medicine at Drexel University, Philadelphia. He reported conflicts of interest with Genentech, AstraZeneca, Celgene, Clovis, and Amgen.
A version of this article first appeared on Medscape.com.
Has the time come to bury BMI in favor of other screening measures?
In 1832, Belgian statistician Adolphe Quetelet introduced the concept of body mass index (BMI) – one’s weight (in kilograms) divided by the square of one’s height (in meters) – as a measurement of ideal body weight. Approximately 140 years later, nutritional epidemiologist Ancel Keys proposed the use of BMI as a surrogate marker for evaluating body fat percentage within a population.
For the past 50 years, the scientific and medical communities have relied on BMI as a research and screening tool to categorize patients’ weight (that is, severely underweight, underweight, normal weight, overweight, and obesity). The World Health Organization, National Institutes of Health, and U.S. Centers for Disease Control and Prevention use the following BMI weight classifications for adult patients (illustrated in the code sketch after the list):
- Underweight: BMI < 18.5
- Normal weight: BMI 18.5-24.9
- Overweight: BMI 25-29.9
- Obesity: BMI ≥ 30
Of note, BMI categories for children and adolescents (aged 2-19 years) are based on sex- and age-specific percentiles and will not be addressed in this article.
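To make the formula and the adult cutoffs above concrete, here is a minimal Python sketch. The function names are ours, not from any clinical library, and the cutoffs are simply the WHO/NIH/CDC adult categories listed above.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2


def bmi_category(value: float) -> str:
    """Classify an adult BMI using the WHO/NIH/CDC cutoffs listed above."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obesity"


# Example: 95 kg at 1.75 m gives a BMI of about 31.0 -> "obesity".
b = bmi(95, 1.75)
print(round(b, 1), bmi_category(b))
```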
BMI appears to be a straightforward, easy, and cost-effective way to identify “healthy” weight and assess a patient’s risk for related conditions. For example, studies show that a BMI ≥ 35 kg/m² correlates with a higher prevalence of type 2 diabetes, hypertension, and dyslipidemia, and with a decreased lifespan. At least 13 types of cancer have been linked to obesity, regardless of dietary or physical activity behaviors. While the health dangers associated with a BMI ≥ 35 are substantial and difficult to dispute, concerns arise when BMI alone is used to determine healthy weight and disease risk in patients with a BMI of 25-35.
BMI limitations
There are troubling limitations to using BMI alone to assess a patient’s weight and health status. BMI takes into account only a patient’s height and weight, neither of which is a sole determinant of health. Moreover, BMI does not distinguish between fat mass and fat-free mass, each of which has distinct effects on health: high fat mass is associated with an increased risk for disease and mortality, while higher lean body mass correlates with increased physical fitness and longevity. BMI also does not consider age, sex, race, ethnicity, or type of adipose tissue, all of which strongly influence disease risk across all BMI categories.
Body composition and adipose tissue
Body composition and the type of excess adipose tissue correlate better with disease risk than does BMI. The World Health Organization defines obesity as a body fat percentage > 25% for men and > 35% for women. Body composition can be measured by skin-fold thickness, bioelectrical impedance, dual-energy x-ray absorptiometry (DXA), CT, or MRI.
A cross-sectional study by Shah and colleagues comparing BMI and DXA found that BMI underestimated obesity prevalence. In the study, BMI characterized 26% of participants as obese, while DXA (a direct measurement of fat) characterized 64%. Further, 39% of patients categorized as nonobese based on BMI were found to be obese on DXA, and BMI misclassified 25% of men and 48% of women overall. These findings and those of other studies suggest that BMI has high specificity but low sensitivity for diagnosing obesity, calling into question its reliability as a clinical screening tool.
Current guideline recommendations on pharmacologic and surgical treatment options for patients with overweight or obesity – including those of the American Association of Clinical Endocrinology and American College of Endocrinology (AACE/ACE) and the American College of Cardiology/American Heart Association/The Obesity Society (ACC/AHA/TOS) – rely on BMI cutoffs, which limits their usefulness. For example, a recent literature review by Li and associates found that Asian American patients are at increased risk for metabolic disease at lower BMIs, including BMIs of 25 or 27. On the basis of such findings, some organizations recommend considering pharmacotherapy at a lower BMI cutoff (≥ 25.0 or ≥ 27.5) for Asian patients to ensure early treatment intervention, because current guidelines do not recommend pharmacologic treatment unless the BMI is at least 27 with weight-related complications, or 30 without them. Under the current guidelines, a patient of Asian descent may therefore have greater disease severity, with potentially more complications, by the time pharmacotherapy is initiated.
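A hedged sketch of these population-specific cutoffs in Python follows. The general cutoffs (27 with weight-related complications, 30 without) come from the guideline language above; the pairing of the 25.0 and 27.5 values with complication status is our assumption, since the article lists both values without mapping them.

```python
def pharmacotherapy_bmi_cutoff(asian_descent: bool, complications: bool) -> float:
    """Return an illustrative BMI cutoff for considering pharmacotherapy.

    General cutoffs follow the guideline discussion above (27 with
    weight-related complications, 30 without). The assignment of 25.0
    vs. 27.5 for Asian patients is an assumption for illustration only.
    """
    if asian_descent:
        return 25.0 if complications else 27.5
    return 27.0 if complications else 30.0


# Example: a patient of Asian descent with complications and a BMI of 26
# would qualify under the lower cutoff but not under the general one.
print(26 >= pharmacotherapy_bmi_cutoff(asian_descent=True, complications=True))   # True
print(26 >= pharmacotherapy_bmi_cutoff(asian_descent=False, complications=True))  # False
```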
As previously noted, body composition best captures the ratio of fat mass to fat-free mass, though it requires special equipment (skinfold calipers, DXA, CT, MRI, or a bioimpedance scale). DXA is frequently used in body-composition research because of its relatively low cost, short scan time, and ability to measure bone density. MRI has been found to be as accurate as CT for assessing visceral adipose tissue (VAT), skeletal muscle mass, and organ mass, and it does not expose patients to ionizing radiation as CT does. Clinical use of MRI, however, is limited by its high cost, and it may be problematic for patients with claustrophobia or those unable to remain immobile for an extended period.
Patients with high VAT mass – as opposed to predominantly subcutaneous adipose tissue (SAT) – are at increased risk for metabolic syndrome, nonalcoholic fatty liver disease, and cardiovascular disease regardless of BMI, underscoring the clinical usefulness of measuring visceral adiposity over BMI.
One of the barriers to implementing VAT assessment in clinical practice is the cost of imaging studies. Fortunately, data suggest that waist circumference and/or waist-to-hip ratio can serve as a valuable surrogate for VAT measurement. A waist circumference greater than 35 inches (88 cm) or a waist-to-hip ratio greater than 0.8 for women, and a waist circumference greater than 40 inches (102 cm) or a waist-to-hip ratio greater than 0.95 for men, indicates increased metabolic disease risk. Obtaining these measurements requires only a tape measure and a few extra minutes, and it yields more useful data than BMI alone. For example, a large cardiometabolic study found that within each BMI category, increasing sex-specific waist circumference was associated with significantly higher VAT, more liver fat, and a more harmful cardiometabolic risk profile. Men and women with a lower or normal BMI and a high waist circumference are at the greatest relative health risk compared with those with low waist circumference values; yet using BMI alone in these patients would not raise any clinical concern – a missed opportunity for cardiometabolic risk reduction.
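The sex-specific thresholds above are easy to operationalize. Here is a minimal sketch; the threshold values are taken from the paragraph above, but the function and its name are illustrative, not from any clinical standard.

```python
def elevated_metabolic_risk(sex: str, waist_cm: float, waist_to_hip: float) -> bool:
    """Flag increased metabolic disease risk from waist circumference (cm)
    and waist-to-hip ratio, using the sex-specific thresholds above:
    women: waist > 88 cm (35 in) or ratio > 0.8;
    men:   waist > 102 cm (40 in) or ratio > 0.95."""
    if sex == "female":
        return waist_cm > 88 or waist_to_hip > 0.8
    if sex == "male":
        return waist_cm > 102 or waist_to_hip > 0.95
    raise ValueError("sex must be 'female' or 'male' for these thresholds")


# Example: a man with a normal BMI but a 108-cm waist would be flagged.
print(elevated_metabolic_risk("male", waist_cm=108, waist_to_hip=0.92))  # True
```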
Biomarkers
Specific biomarkers are closely related to obesity. Leptin and resistin levels increase with adipose mass, while adiponectin decreases, probably contributing to insulin resistance. The higher levels of tumor necrosis factor–alpha and interleukin-6 seen in obesity contribute to chronic inflammation. The combined effect of chronic inflammation and insulin resistance allows greater bioavailability of insulinlike growth factor-1 (IGF-1), which has a role in initiating type 2 diabetes, cardiovascular disease, and cancer. Ideally, measuring these biomarkers could provide more advantageous information than BMI. Unfortunately, for now, the lack of standardized assays and imperfect knowledge of exactly how these biomarkers elicit disease prevent their clinical use.
Obesity is a common, highly complex, chronic, and relapsing disease. Thankfully, a number of effective treatments and interventions are available. Although an accurate diagnosis of obesity is essential, underdiagnosed cases and missed opportunities for metabolic disease risk reduction persist. Overdiagnosing obesity, however, has the potential to incur unnecessary health care costs and result in weight bias and stigma.
While BMI is a quick and inexpensive means to assess obesity, by itself it lacks the necessary components for an accurate diagnosis. Particularly for individuals with a normal BMI or less severe overweight/obesity (BMI 27-34.9), other factors must be accounted for, including age, gender, and race. At a minimum, waist circumference should be measured to best risk-stratify and determine treatment intensity. Body composition analysis with BMI calculation refines the diagnosis of obesity.
Finally, clinicians may get the most from BMI by tracking its change over time (delta models). As with so many other clinical measurements, the trajectory often tells the most informative story. For example, a patient whose BMI decreased from 45 to 35 may warrant less intensive treatment than a patient whose BMI increased from 26 to 31. Any change in BMI warrants clinical attention. A rapidly or consistently increasing BMI, even within the normal range, should prompt clinicians to assess other factors related to obesity and metabolic disease risk (for example, lifestyle factors, waist circumference, blood pressure, cholesterol, and diabetes screening) and to initiate a conversation about weight management. Similarly, a consistently or rapidly decreasing BMI – even in elevated ranges and particularly with unintentional weight loss – should prompt evaluation.
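As one way to operationalize the trajectory idea, here is a sketch that flags a consistently rising or falling BMI series for follow-up. The one-unit-per-year threshold and the annual-measurement assumption are illustrative choices of ours, not values from the article.

```python
def bmi_trend_flag(bmi_series: list[float], per_year_threshold: float = 1.0) -> str:
    """Flag a BMI series (annual measurements, oldest first) whose average
    change per year exceeds an illustrative threshold in either direction."""
    if len(bmi_series) < 2:
        return "insufficient data"
    years = len(bmi_series) - 1
    slope = (bmi_series[-1] - bmi_series[0]) / years  # average change per year
    if slope >= per_year_threshold:
        return "rising BMI: assess metabolic risk, discuss weight management"
    if slope <= -per_year_threshold:
        return "falling BMI: evaluate, especially if weight loss is unintentional"
    return "stable BMI"


# Example: 26 -> 31 over five annual visits averages +1.25 BMI units/year.
print(bmi_trend_flag([26, 27, 28, 29.5, 31]))
```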
Although BMI continues to be useful in clinical practice, epidemiology, and research, it should be used in combination with other clinical factors to provide the utmost quality of care.
Dr. Bartfield is assistant professor, obesity medicine specialist, Wake Forest Baptists Medical Center/Atrium Health Weight Management Center, Greensboro, N.C. She has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Debate: Initial combination therapy for type 2 diabetes?
SAN DIEGO – Should pharmacologic treatment of newly diagnosed type 2 diabetes begin with combination therapy or with sequential therapy? This question was debated by two well-known clinician-researchers in the diabetes world at the recent annual scientific sessions of the American Diabetes Association.
Ralph A. DeFronzo, MD, argued for combination therapy at the time of diagnosis, and David M. Nathan, MD, countered that sequential therapy is a better way to go.
‘The ominous octet’: Addressing multiple underlying defects
Of course, Dr. DeFronzo said, the right agents must be selected. “The drugs we’re going to use as combination at a minimum have to correct the underlying insulin resistance and beta-cell failure, or we are not going to be successful.”
In addition, he said, these drugs should also provide protection against cardiovascular, kidney, and fatty liver disease, because “[managing] diabetes is more than just controlling the glucose.”
Recent U.S. data suggest that half of people with diabetes have a hemoglobin A1c above 7%, and a quarter remain above 8%. “We’re not really doing a very good job in terms of glycemic control,” said Dr. DeFronzo, chief of the diabetes division at the University of Texas Health Science Center, San Antonio.
One reason for this failure, he said, is the complex pathophysiology of type 2 diabetes, represented by eight major defects he calls the “ominous octet”: decreased pancreatic insulin secretion, a diminished gut incretin effect, decreased glucose uptake in muscle, increased lipolysis, increased glucose reabsorption in the kidney, increased hepatic glucose production, increased glucagon secretion, and neurotransmitter dysfunction.
“There are eight problems, so you’re going to need multiple drugs in combination ... not ones that just lower the A1c.”
And, Dr. DeFronzo said, these drugs “must be started early in the natural history of type 2 diabetes if progressive beta-cell failure is to be prevented.”
He pointed to the United Kingdom Prospective Diabetes Study (UKPDS), in which the sulfonylurea glyburide was used first, followed by metformin. With each drug, the A1c decreased initially but then rose within 3 years. By 15 years, 65% of participants were taking insulin.
More recently, the GRADE study examined the effects of adding four different glucose-lowering agents (glimepiride, sitagliptin, liraglutide, or insulin glargine) in people who hadn’t achieved target A1c with metformin.
“So, by definition, drug number one failed,” he observed.
During the study, all participants showed an initial A1c drop, followed by progressive failure, “again ... showing that stepwise therapy doesn’t work.”
All patients with type 2 diabetes at his center are treated using the “DeFronzo algorithm” consisting of three drug classes: a glucagon-like peptide-1 (GLP-1) agonist, a sodium-glucose cotransporter-2 (SGLT2) inhibitor, and pioglitazone, as each of them targets more than one of the “ominous octet” defects.
“The drugs that clearly do not work on a long-term basis are metformin and sulfonylureas,” he emphasized.
Several studies demonstrate the efficacy of combination therapy, he said. In one, DURATION 8, the combination of exenatide and dapagliflozin was superior to either agent individually in lowering A1c, cardiovascular events, and all-cause mortality over 2 years.
And in the 5-year VERIFY study, early combination therapy with vildagliptin plus metformin proved superior in A1c-lowering to starting patients on metformin and adding vildagliptin later.
Dr. DeFronzo’s own “knock-out punch” study, EDICT, in people with new-onset type 2 diabetes, compared the initial combination of metformin, pioglitazone, and exenatide with conventional sequential add-on therapy with metformin, glipizide, and insulin glargine.
The primary endpoint – the difference in the proportion of patients with A1c less than 6.5% – was 70% versus 29% with combination compared with sequential therapy, a difference “as robust as you can be going against the stepwise approach” at P < .00001, he said.
The combination therapy virtually normalized both insulin sensitivity and beta-cell function, whereas the conventional therapy did neither.
Also from Dr. DeFronzo’s group, the Qatar study compared exenatide plus pioglitazone with basal-bolus insulin in people with type 2 diabetes of about 10 years’ duration and an A1c above 7.5% while taking a sulfonylurea plus metformin. The combination therapy produced an A1c of 6.2%, versus 7.1% with insulin.
Dr. DeFronzo pointed to new language added to the ADA Standards of Medical Care in Diabetes in 2022.
While still endorsing stepwise therapy, the document also says that “there are data to support initial combination therapy for more rapid attainment of glycemic targets and longer durability of glycemic effect.” The two references cited are EDICT and VERIFY.
“Finally, the American Diabetes Association has gotten the message,” he concluded.
Sequential therapy: Far more data, lower cost
Dr. Nathan began by pointing out that the ADA Standards of Care continue to advise use of metformin as first-line therapy for type 2 diabetes “because of its high efficacy in lowering A1c, minimal hypoglycemia risk when used as monotherapy, weight neutrality with the potential for modest weight loss, good safety profile, and low cost.”
He emphasized that he was not arguing “against the use of early or even initial combination therapy when there are co-existent morbidities [such as cardiovascular or chronic kidney disease] that merit use of demonstrably effective medications.” But, Dr. Nathan pointed out, such patients are not the majority of those with type 2 diabetes.
He laid out four main arguments for choosing traditional sequential therapy over initial combination therapy. For one, it “enables determination of efficacy of adding individual medications, while initial combination precludes determining benefits of individual drugs.”
Second, traditional sequential therapy allows for assessment of side effects from individual drugs.
“With Dr. DeFronzo’s algorithm you throw everything at them, and if they get nausea, vomiting, or diarrhea, you won’t know which drug it is ... If they get an allergic reaction, you won’t know which medication it is,” observed Dr. Nathan, who is director of the clinical research center and the diabetes center at Massachusetts General Hospital, Boston.
Moreover, he said, traditional sequential therapy “promotes individualization, with selection of drugs, which is something we’re laboring to achieve. Initial combination obviously limits that.”
Further, sequential therapy is “parsimonious and cost-effective, whereas initial combination therapy is expensive, with modest advantages at most.”
And, there are “lots of data” supporting traditional sequential therapy and relatively few for initial combination therapy.
Dr. Nathan added that when he searched the literature for relevant randomized clinical trials, he found 16 investigating initial combination therapy versus monotherapy, but only three that examined combination versus sequential therapy.
“Very few of them, except for EDICT and VERIFY, actually include the sequential therapy that we would use in practice,” he said.
Moreover, he observed, except for the VERIFY study, most are less than half a year in duration. And in VERIFY, there was an initial difference of 20 percentage points in the proportion of patients with an A1c below 7.0%, but by 12 months that difference had shrunk to just 5-6 percentage points.
“So, looking over time is very important,” Dr. Nathan cautioned. “We really have to be careful ... Six months is barely enough time to see A1c equilibrate ... You really need to study a long-term, chronic, progressive disease like type 2 diabetes over a long enough period of time to be clinically meaningful.”
Dr. Nathan acknowledged to Dr. DeFronzo that the latter’s EDICT study was “well conducted” and “long enough,” and that the researchers did examine monotherapy versus sequential therapy. However, he pointed out that it was a small study with 249 patients and the dropout rate was high, with 58% of patients remaining in the study with triple therapy versus 68% for conventional treatment. “That’s a bit problematic,” Dr. Nathan noted.
At 2 years, the “trivial” difference in A1c was 6.5% with conventional therapy versus 6.0% with triple therapy. “This is all on the very flat complications curve with regard to A1c,” he observed.
Patients treated with sequential therapy with sulfonylurea and insulin had higher rates of hypoglycemia and weight gain, whereas the combination triple therapy group had more gastrointestinal side effects and edema.
However, the most dramatic difference was cost: the average wholesale price for sequential therapy totaled about $85 per month, compared with $1,310 for initial combination therapy. For the approximately 1.5 million patients with new-onset type 2 diabetes in the United States, that difference comes to an additional cost per year of about $22 billion, Dr. Nathan calculated.
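The arithmetic behind that estimate can be reproduced directly from the figures quoted. This is a back-of-the-envelope check of the numbers in the paragraph above, not part of Dr. Nathan’s presentation.

```python
sequential_per_month = 85         # average wholesale price, sequential therapy ($/month)
combination_per_month = 1_310     # average wholesale price, initial combination ($/month)
new_onset_patients = 1_500_000    # approximate annual US new-onset type 2 diabetes cases

extra_per_patient_year = (combination_per_month - sequential_per_month) * 12
total_extra = extra_per_patient_year * new_onset_patients
print(f"${extra_per_patient_year:,} per patient-year; ${total_extra / 1e9:.2f} billion total")
# -> $14,700 per patient-year; $22.05 billion total, matching the ~$22 billion cited
```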
“Although current sequential therapy leaves much to be desired ... initial combination therapy has generally only been tested for brief, clinically insufficient periods.
“And therefore, I think sequential therapy is still what is called for,” he concluded. “Well-powered, acceptable-duration studies need to be performed before we can adopt initial/early combination therapy as the standard of care.”
Dr. DeFronzo has reported receiving research support from Boehringer Ingelheim, AstraZeneca, and Merck; payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from AstraZeneca; and participation on a data safety monitoring board or advisory board for AstraZeneca, Intarcia, Novo Nordisk, and Boehringer Ingelheim. Dr. Nathan has reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
SAN DIEGO –
This question was debated by two well-known clinician-researchers in the diabetes world at the recent annual scientific sessions of the American Diabetes Association.
Ralph A. DeFronzo, MD, argued for combination therapy at the time of diagnosis, and David M. Nathan, MD, countered that sequential therapy is a better way to go.
‘The ominous octet’: Addressing multiple underlying defects
Of course, Dr. DeFronzo said, the right agents must be selected. “The drugs we’re going to use as combination at a minimum have to correct the underlying insulin resistance and beta-cell failure, or we are not going to be successful.”
In addition, he said, these drugs should also provide protection against cardiovascular, kidney, and fatty liver disease, because “[managing] diabetes is more than just controlling the glucose.”
Recent U.S. data suggest that half of people with diabetes have a hemoglobin A1c above 7%, and a quarter remain above 8%. “We’re not really doing a very good job in terms of glycemic control,” said Dr. DeFronzo, chief of the diabetes division at University of Texas, San Antonio.
One reason for this failure, he said, is the complex pathophysiology of type 2 diabetes represented by eight major defects, what he called the “ominous octet”: decreased pancreatic insulin secretion, gut incretin effects, glucose uptake in the muscle, increased lipolysis, glucose reabsorption in the kidney, hepatic glucose production, increased glucagon secretion, and neurotransmitter dysfunction.
“There are eight problems, so you’re going to need multiple drugs in combination ... not ones that just lower the A1c.”
And, Dr. DeFronzo said, these drugs “must be started early in the natural history of type 2 diabetes if progressive beta-cell failure is to be prevented.”
He pointed to the United Kingdom Prospective Diabetes Study (UKPDS), in which the sulfonylurea glyburide was used first, followed by metformin. With each drug, the A1c decreased initially but then rose within 3 years. By 15 years, 65% of participants were taking insulin.
More recently, the GRADE study examined the effects of adding four different glucose-lowering agents (glimepiride, sitagliptin, liraglutide, or insulin glargine) in people who hadn’t achieved target A1c with metformin.
“So, by definition, drug number one failed,” he observed.
During the study, all participants showed an initial A1c drop, followed by progressive failure, “again ... showing that stepwise therapy doesn’t work.”
All patients with type 2 diabetes at his center are treated using the “DeFronzo algorithm” consisting of three drug classes: a glucagon-like peptide-1 (GLP-1) agonist, a sodium-glucose cotransporter-2 (SGLT2) inhibitor, and pioglitazone, as each of them targets more than one of the “ominous octet” defects.
“The drugs that clearly do not work on a long-term basis are metformin and sulfonylureas,” he emphasized.
Several studies demonstrate the efficacy of combination therapy, he said. In one, DURATION 8, the combination of exenatide and dapagliflozin was superior to either agent individually in lowering A1c, cardiovascular events, and all-cause mortality over 2 years.
And in the 5-year VERIFY study, early combination therapy with vildagliptin plus metformin proved superior in A1c-lowering to starting patients on metformin and adding vildagliptin later.
Dr. DeFronzo’s own “knock-out punch” study, EDICT, in people with new-onset type 2 diabetes, compared the initial combination of metformin, pioglitazone, and exenatide with conventional sequential add-on therapy with metformin, glipizide, and insulin glargine.
The primary endpoint – the difference in the proportion of patients with A1c less than 6.5% – was 70% versus 29% with combination compared with sequential therapy, a difference “as robust as you can be going against the stepwise approach” at P < .00001, he said.
The combination therapy virtually normalized both insulin sensitivity and beta-cell function, whereas the conventional therapy did neither.
Also from Dr. DeFronzo’s group, in the Qatar study, which compared exenatide plus pioglitazone with basal-bolus insulin in people with about 10 years’ duration of type 2 diabetes and A1c above 7.5% taking sulfonylurea plus metformin, the combination therapy produced an A1c of 6.2% versus 7.1% with insulin.
Dr. DeFronzo pointed to new language added to the ADA Standards of Medical Care in Diabetes in 2022.
While still endorsing stepwise therapy, the document also says that “there are data to support initial combination therapy for more rapid attainment of glycemic targets and longer durability of glycemic effect.” The two references cited are EDICT and VERIFY.
“Finally, the American Diabetes Association has gotten the message,” he concluded.
Sequential therapy: Far more data, lower cost
Dr. Nathan began by pointing out that the ADA Standards of Care continue to advise use of metformin as first-line therapy for type 2 diabetes “because of its high efficacy in lowering A1c, minimal hypoglycemia risk when used as monotherapy, weight neutrality with the potential for modest weight loss, good safety profile, and low cost.”
He emphasized that he was not arguing “against the use of early or even initial combination therapy when there are co-existent morbidities [such as cardiovascular or chronic kidney disease] that merit use of demonstrably effective medications.” But Dr. Nathan pointed out, those patients are not the majority with type 2 diabetes.
He laid out four main arguments for choosing traditional sequential therapy over initial combination therapy. For one, it “enables determination of efficacy of adding individual medications, while initial combination precludes determining benefits of individual drugs.”
Second, traditional sequential therapy allows for assessment of side effects from individual drugs.
“With Dr. DeFronzo’s algorithm you throw everything at them, and if they get nausea, vomiting, or diarrhea, you won’t know which drug it is ... If they get an allergic reaction, you won’t know which medication it is,” observed Dr. Nathan, who is director of the clinical research center and the diabetes center at Massachusetts General Hospital, Boston.
Moreover, he said, traditional sequential therapy “promotes individualization, with selection of drugs, which is something we’re laboring to achieve. Initial combination obviously limits that.”
Further, sequential therapy is “parsimonious and cost-effective, whereas initial combination therapy is expensive, with modest advantages at most.”
And, there are “lots of data” supporting traditional sequential therapy and relatively little for initial combination therapy.
Dr. Nathan added that when he searched the literature for relevant randomized clinical trials, he found 16 investigating initial combination therapy versus monotherapy, but only three that examined combination versus sequential therapy.
“Very few of them, except for EDICT and VERIFY, actually include the sequential therapy that we would use in practice,” he said.
Moreover, he observed, except for the VERIFY study, most are less than half a year in duration. And in VERIFY, there was an initial 20% difference in the proportions of patients with A1c below 7.0%, but by 12 months, that difference had shrunk to just 5%-6%.
“So, looking over time is very important,” Dr. Nathan cautioned. “We really have to be careful ... Six months is barely enough time to see A1c equilibrate ... You really need to study a long-term, chronic, progressive disease like type 2 diabetes over a long enough period of time to be clinically meaningful.”
Dr. Nathan acknowledged to Dr. DeFronzo that the latter’s EDICT study was “well conducted” and “long enough,” and that the researchers did examine monotherapy versus sequential therapy. However, he pointed out that it was a small study with 249 patients and the dropout rate was high, with 58% of patients remaining in the study with triple therapy versus 68% for conventional treatment. “That’s a bit problematic,” Dr. Nathan noted.
At 2 years, the “trivial” difference in A1c was 6.5% with conventional therapy versus 6.0% with triple therapy. “This is all on the very flat complications curve with regard to A1c,” he observed.
Patients treated with sequential therapy with sulfonylurea and insulin had higher rates of hypoglycemia and weight gain, whereas the combination triple therapy group had more gastrointestinal side effects and edema.
However, the most dramatic difference was cost: the average wholesale price for sequential therapy totaled about $85 per month, compared with $1,310 for initial combination therapy. For the approximately 1.5 million patients with new-onset type 2 diabetes in the United States, that difference comes to an additional cost per year of about $22 billion, Dr. Nathan calculated.
“Although current sequential therapy leaves much to be desired ... initial combination therapy has generally only been tested for brief, clinically insufficient periods.
“And therefore, I think sequential therapy is still what is called for,” he concluded. “Well-powered, acceptable-duration studies need to be performed before we can adopt initial/early combination therapy as the standard of care.”
Dr. DeFronzo has reported receiving research support from Boehringer Ingelheim, AstraZeneca, and Merck; payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from AstraZeneca; and participation on a data safety monitoring board or advisory board for AstraZeneca, Intarcia, Novo Nordisk, and Boehringer Ingelheim. Dr. Nathan has reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
SAN DIEGO –
This question was debated by two well-known clinician-researchers in the diabetes world at the recent annual scientific sessions of the American Diabetes Association.
Ralph A. DeFronzo, MD, argued for combination therapy at the time of diagnosis, and David M. Nathan, MD, countered that sequential therapy is a better way to go.
‘The ominous octet’: Addressing multiple underlying defects
Of course, Dr. DeFronzo said, the right agents must be selected. “The drugs we’re going to use as combination at a minimum have to correct the underlying insulin resistance and beta-cell failure, or we are not going to be successful.”
In addition, he said, these drugs should also provide protection against cardiovascular, kidney, and fatty liver disease, because “[managing] diabetes is more than just controlling the glucose.”
Recent U.S. data suggest that half of people with diabetes have a hemoglobin A1c above 7%, and a quarter remain above 8%. “We’re not really doing a very good job in terms of glycemic control,” said Dr. DeFronzo, chief of the diabetes division at University of Texas, San Antonio.
One reason for this failure, he said, is the complex pathophysiology of type 2 diabetes represented by eight major defects, what he called the “ominous octet”: decreased pancreatic insulin secretion, gut incretin effects, glucose uptake in the muscle, increased lipolysis, glucose reabsorption in the kidney, hepatic glucose production, increased glucagon secretion, and neurotransmitter dysfunction.
“There are eight problems, so you’re going to need multiple drugs in combination ... not ones that just lower the A1c.”
And, Dr. DeFronzo said, these drugs “must be started early in the natural history of type 2 diabetes if progressive beta-cell failure is to be prevented.”
He pointed to the United Kingdom Prospective Diabetes Study (UKPDS), in which the sulfonylurea glyburide was used first, followed by metformin. With each drug, the A1c decreased initially but then rose within 3 years. By 15 years, 65% of participants were taking insulin.
More recently, the GRADE study examined the effects of adding four different glucose-lowering agents (glimepiride, sitagliptin, liraglutide, or insulin glargine) in people who hadn’t achieved target A1c with metformin.
“So, by definition, drug number one failed,” he observed.
During the study, all participants showed an initial A1c drop, followed by progressive failure, “again ... showing that stepwise therapy doesn’t work.”
All patients with type 2 diabetes at his center are treated using the “DeFronzo algorithm” consisting of three drug classes: a glucagon-like peptide-1 (GLP-1) agonist, a sodium-glucose cotransporter-2 (SGLT2) inhibitor, and pioglitazone, as each of them targets more than one of the “ominous octet” defects.
“The drugs that clearly do not work on a long-term basis are metformin and sulfonylureas,” he emphasized.
Several studies demonstrate the efficacy of combination therapy, he said. In one, DURATION 8, the combination of exenatide and dapagliflozin was superior to either agent individually in lowering A1c, cardiovascular events, and all-cause mortality over 2 years.
And in the 5-year VERIFY study, early combination therapy with vildagliptin plus metformin proved superior in A1c-lowering to starting patients on metformin and adding vildagliptin later.
Dr. DeFronzo’s own “knock-out punch” study, EDICT, in people with new-onset type 2 diabetes, compared the initial combination of metformin, pioglitazone, and exenatide with conventional sequential add-on therapy with metformin, glipizide, and insulin glargine.
The primary endpoint – the difference in the proportion of patients with A1c less than 6.5% – was 70% versus 29% with combination compared with sequential therapy, a difference “as robust as you can be going against the stepwise approach” at P < .00001, he said.
The combination therapy virtually normalized both insulin sensitivity and beta-cell function, whereas the conventional therapy did neither.
Also from Dr. DeFronzo’s group, in the Qatar study, which compared exenatide plus pioglitazone with basal-bolus insulin in people with about 10 years’ duration of type 2 diabetes and A1c above 7.5% taking sulfonylurea plus metformin, the combination therapy produced an A1c of 6.2% versus 7.1% with insulin.
Dr. DeFronzo pointed to new language added to the ADA Standards of Medical Care in Diabetes in 2022.
While still endorsing stepwise therapy, the document also says that “there are data to support initial combination therapy for more rapid attainment of glycemic targets and longer durability of glycemic effect.” The two references cited are EDICT and VERIFY.
“Finally, the American Diabetes Association has gotten the message,” he concluded.
Sequential therapy: Far more data, lower cost
Dr. Nathan began by pointing out that the ADA Standards of Care continue to advise use of metformin as first-line therapy for type 2 diabetes “because of its high efficacy in lowering A1c, minimal hypoglycemia risk when used as monotherapy, weight neutrality with the potential for modest weight loss, good safety profile, and low cost.”
He emphasized that he was not arguing “against the use of early or even initial combination therapy when there are co-existent morbidities [such as cardiovascular or chronic kidney disease] that merit use of demonstrably effective medications.” But Dr. Nathan pointed out, those patients are not the majority with type 2 diabetes.
He laid out four main arguments for choosing traditional sequential therapy over initial combination therapy. For one, it “enables determination of efficacy of adding individual medications, while initial combination precludes determining benefits of individual drugs.”
Second, traditional sequential therapy allows for assessment of side effects from individual drugs.
“With Dr. DeFronzo’s algorithm you throw everything at them, and if they get nausea, vomiting, or diarrhea, you won’t know which drug it is ... If they get an allergic reaction, you won’t know which medication it is,” observed Dr. Nathan, who is director of the clinical research center and the diabetes center at Massachusetts General Hospital, Boston.
Moreover, he said, traditional sequential therapy “promotes individualization, with selection of drugs, which is something we’re laboring to achieve. Initial combination obviously limits that.”
Further, sequential therapy is “parsimonious and cost-effective, whereas initial combination therapy is expensive, with modest advantages at most.”
And, there are “lots of data” supporting traditional sequential therapy and relatively little for initial combination therapy.
Dr. Nathan added that when he searched the literature for relevant randomized clinical trials, he found 16 investigating initial combination therapy versus monotherapy, but only three that examined combination versus sequential therapy.
“Very few of them, except for EDICT and VERIFY, actually include the sequential therapy that we would use in practice,” he said.
Moreover, he observed, except for the VERIFY study, most are less than half a year in duration. And in VERIFY, there was an initial 20% difference in the proportions of patients with A1c below 7.0%, but by 12 months, that difference had shrunk to just 5%-6%.
“So, looking over time is very important,” Dr. Nathan cautioned. “We really have to be careful ... Six months is barely enough time to see A1c equilibrate ... You really need to study a long-term, chronic, progressive disease like type 2 diabetes over a long enough period of time to be clinically meaningful.”
Dr. Nathan acknowledged to Dr. DeFronzo that the latter’s EDICT study was “well conducted” and “long enough,” and that the researchers did examine monotherapy versus sequential therapy. However, he pointed out that it was a small study with 249 patients and the dropout rate was high, with 58% of patients remaining in the study with triple therapy versus 68% for conventional treatment. “That’s a bit problematic,” Dr. Nathan noted.
At 2 years, the “trivial” difference in A1c was 6.5% with conventional therapy versus 6.0% with triple therapy. “This is all on the very flat complications curve with regard to A1c,” he observed.
Patients treated with sequential therapy with sulfonylurea and insulin had higher rates of hypoglycemia and weight gain, whereas the combination triple therapy group had more gastrointestinal side effects and edema.
However, the most dramatic difference was cost: the average wholesale price for sequential therapy totaled about $85 per month, compared with $1,310 for initial combination therapy. For the approximately 1.5 million patients with new-onset type 2 diabetes in the United States, that difference comes to an additional cost per year of about $22 billion, Dr. Nathan calculated.
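As a back-of-the-envelope check of that figure (my arithmetic, using the prices quoted above): the monthly difference of $1,310 − $85 = $1,225 comes to roughly $14,700 per patient per year, and $14,700 × 1.5 million new patients is approximately $22 billion.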
“Although current sequential therapy leaves much to be desired ... initial combination therapy has generally only been tested for brief, clinically insufficient periods.
“And therefore, I think sequential therapy is still what is called for,” he concluded. “Well-powered, acceptable-duration studies need to be performed before we can adopt initial/early combination therapy as the standard of care.”
Dr. DeFronzo has reported receiving research support from Boehringer Ingelheim, AstraZeneca, and Merck; payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from AstraZeneca; and participation on a data safety monitoring board or advisory board for AstraZeneca, Intarcia, Novo Nordisk, and Boehringer Ingelheim. Dr. Nathan has reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
Celiac disease: Update on diagnosis and monitoring
Celiac disease is a small bowel disorder. Specific antibodies, along with a duodenal biopsy, allow a secure diagnosis of celiac disease. Case detection rates have improved, but many patients remain undiagnosed.
The only treatment available at present is a gluten-free diet (GFD). Most patients respond clinically to a GFD, but histologic recovery is not always complete, and incomplete recovery may have clinical consequences.
The anti-tissue transglutaminase IgA test (tTg-IgA) is the best initial serology test. A total IgA level appropriate for age is required to interpret a negative result. In patients with IgA deficiency, IgG-based tests – deamidated gliadin peptide (DGP) IgG and/or tTg-IgG – may be helpful for diagnosis, along with a duodenal biopsy.
First-degree female relatives with homozygous DQ2 positivity are at highest risk.
Both serology and duodenal biopsy have pitfalls in the diagnosis of celiac disease. In children, the diagnosis is secure with a tTg-IgA level of at least 10 times the upper limit of normal (≥10×ULN) together with positive endomysial antibodies (EMA).
There are fewer data in adults on the correlation between a tTg-IgA of ≥10×ULN and villous atrophy. All other patients require biopsy for diagnosis.
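(As an illustrative example only, with a hypothetical assay: if a given test’s ULN were 10 U/mL, the ≥10×ULN criterion would correspond to a tTg-IgA of at least 100 U/mL. The multiplier, not the absolute value, is what matters, because cutoffs differ between assays.)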
Considerations for forgoing biopsy in adults include a tTg-IgA of ≥10×ULN, positive serology in patients who are already following a GFD or are otherwise unable to undergo endoscopy with duodenal biopsy, and shared decision-making. Recovery from celiac disease is assessed by clinical response to a GFD and by conversion of antibodies to negative, which does not always correlate with histology.
Clinical consequences of persistent villous atrophy include increased risks for lymphoproliferative malignancy, hip fracture, and refractory celiac disease.
Dr. Semrad is director of the small bowel disease and nutrition program at the University of Chicago Medicine where she is a professor of medicine. She disclosed no conflicts of interest.
References
Rubio-Tapia et al. Am J Gastroenterol. 2023;118:59-76.
Husby S et al. J Pediatr Gastroenterol Nutr. 2020;70:141-57.
Advances in pancreaticobiliary disease interventions: More options and better outcomes
Highlights of advances in pancreaticobiliary disease interventions were reviewed at this year’s Digestive Disease Week (DDW) as part of the American Gastroenterological Association (AGA) postgraduate course.
Over the last several decades, the endoscopic treatment of pancreaticobiliary disease has advanced exponentially. Evidence-based advances are changing the landscape of pancreaticobiliary disease management.
While endoscopic retrograde cholangiopancreatography (ERCP) with transpapillary stent placement is first-line for the treatment of biliary obstruction, endoscopic ultrasound (EUS)-guided biliary drainage has emerged as an effective alternative in cases of failed ERCP. These procedures can be performed via a transhepatic approach (hepaticogastrostomy) from the proximal stomach, an extrahepatic approach (choledochoduodenostomy) from the duodenum, or via the gallbladder. Numerous studies have demonstrated the safety and efficacy of these interventions in malignant biliary obstruction. A recent systematic review and meta-analysis that pooled all of these approaches concluded that EUS-guided biliary drainage is also reasonable to offer in benign disease when ERCP has failed or is not technically possible.
EUS-guided gallbladder drainage is similarly emerging as an alternative approach for the management of acute cholecystitis. It is a reasonable option in patients with acute cholecystitis who are poor surgical candidates, have no evidence of gallbladder perforation, and can tolerate sedation. Moreover, this approach may be preferred over ERCP with cystic duct stent placement in the setting of a large stone burden, gastric outlet obstruction, or an indwelling metal biliary stent occluding the cystic duct. Multidisciplinary discussion with surgical and interventional radiology services is essential, especially given that this technique may preclude future cholecystectomy.
Indeterminate biliary strictures have historically posed a major diagnostic challenge, and current approaches to evaluating such strictures lack diagnostic sensitivity. ERCP with concurrent brushing of the bile duct for cytology remains the most commonly used method of acquiring tissue, but the sensitivity of diagnosis on brush cytology remains frustratingly low. Compelling recent evidence shows that increasing the number of brush passes to 30 in an indeterminate stricture improves diagnostic sensitivity; it is a simple, safe, and low-cost intervention. This approach may ultimately decrease the number of patients requiring surgical intervention, which is particularly important given that up to one-fifth of suspected biliary malignancies are found to be benign after surgical resection.
Not only have studies addressed the diagnostic yield of stricture evaluation, but the treatment of biliary strictures has also evolved. Various stents are available, and different practice patterns have emerged for management of this entity. In an updated meta-analysis of randomized controlled trials evaluating multiple plastic stents versus a single covered metal stent for benign biliary strictures, no difference was found in stricture resolution, stricture recurrence, stent migration, or adverse events. However, patients treated with covered metal stents required fewer ERCP sessions to achieve stricture resolution. Moreover, subgroup analysis found no difference in stricture resolution among anastomotic strictures, chronic pancreatitis, and bile duct injury. Despite the higher cost of the stent itself, covered metal stents may ultimately lead to an overall decrease in health care expenditure.
The above examples are only a small subset of the progress that has been made in endoscopic management of pancreaticobiliary disease. The armamentarium of tools and techniques will continue to evolve to help us provide better minimally invasive care for our patients.
Dr. Schulman is associate professor in the division of gastroenterology and hepatology and the department of surgery at the University of Michigan. She is the incoming chief of endoscopy and the director of bariatric endoscopy. She disclosed consultancy work with Apollo Endosurgery, Boston Scientific, Olympus and MicroTech. She also disclosed research and grant support from GI Dynamics and Fractyl.
Conflicting blood pressure targets: Déjà vu all over again
Stop me if you’ve heard this before. There’s a controversy over blood pressure targets. Some argue for 140/90 mm Hg, others for 130/80 mm Hg, and some super ambitious folks think that we should aim for 120/80 mm Hg. If this sounds familiar, it should. We did it in 2017. It’s unclear what, if anything, we learned from the experience. On the upside, it’s not as bad as it was 100 years ago.
When high blood pressure was a ‘good’ thing
Back then, many believed that you needed higher blood pressure as you got older to push the blood through your progressively stiffened and hardened arteries. Hence the name “essential” hypertension. The concern was that lowering blood pressure would hypoperfuse your organs and be dangerous. In the 1930s, John Hay told an audience at a British Medical Association lecture: “The greatest danger to a man with high blood pressure lies in its discovery, because then some fool is certain to try and reduce it.”
The 1900s were a simpler time when people had fatal strokes in their 50s, and their families were consoled by the knowledge that they had lived a good life.
If our thinking around blood pressure had evolved slightly faster, perhaps President Roosevelt wouldn’t have died of a stroke during World War II as his doctors watched his systolic blood pressure climb above 200 mm Hg and suggested massages and barbiturates to take the edge off.
The current controversy
Not that long ago, 180 mm Hg was considered mild hypertension. Now, we are arguing about a systolic blood pressure of 140 versus 130 mm Hg.
The American Academy of Family Physicians takes the view that 140/90 mm Hg is good enough for most people. Their most recent clinical practice guideline, based primarily on two 2020 Cochrane Reviews of blood pressure targets in patients with and without cardiovascular disease, did not find any mortality benefit for a lower blood pressure threshold.
This puts the AAFP guideline in conflict with the 2017 guideline issued jointly by the American College of Cardiology, American Heart Association, and nine other groups, which recommended a target of 130/80 mm Hg for pretty much everyone. Though they say greater than 140/90 mm Hg should be the threshold for low-risk patients or for starting therapy post stroke, we often forget those nuances. The main point of contention is that the AAFP guideline was looking for a mortality benefit, whereas the ACC/AHA/everyone else guideline was looking at preventing cardiovascular events. The latter guideline was driven mainly by the results of the SPRINT trial. ACC/AHA argue for more aggressive targets to prevent the things that cardiologists care about, namely heart attacks.
The AAFP guideline conceded that more aggressive control will result in fewer myocardial infarctions but warned that it comes with more adverse events. Treating 1,000 patients to this lower target would theoretically prevent four MIs and possibly three strokes, but result in 30 adverse events.
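In number-needed-to-treat terms (my arithmetic from the figures above): roughly 250 patients would need to be treated to the lower target to prevent one MI (1,000/4) and roughly 333 to prevent one stroke (1,000/3), while one additional adverse event would occur for about every 33 patients treated (1,000/30).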
In the end, what we are seeing here is not so much a debate over the evidence as a debate over priorities. Interventions that don’t improve mortality can be questioned in terms of their cost effectiveness. But you probably don’t want to have a heart attack (even a nonfatal one). And you certainly don’t want to have a stroke. However, lower blood pressure targets inevitably require more medications. Notwithstanding the economic costs, the dangers of polypharmacy, medication interactions, side effects, and syncope leading to falls cannot be ignored. Falls are not benign adverse events, especially in older adults.
The counterargument is that physicians are human and often let things slide. Set the target at 140/90 mm Hg, and many physicians won’t jump on a systolic blood pressure of 144 mm Hg. Set the target at 130 mm Hg, and maybe they’ll be more likely to react. There’s a fine line between permissiveness and complacency.
If you zoom out and look at the multitude of blood pressure guidelines, you start to notice an important fact. There is not much daylight between them. There are subtle differences in what constitutes high risk and different definitions of older (older should be defined as 10 years older than the reader’s current age). But otherwise, the blood pressure targets are not that different.
Does that final 10 mm Hg really matter when barriers to care mean that tens of millions in the United States are unaware they have hypertension? Even among those diagnosed, many are either untreated or inadequately treated.
With this context, perhaps the most insightful thing that can be said about the blood pressure guideline controversy is that it’s not all that controversial. We can likely all agree that we need to be better at treating hypertension and that creative solutions to reach underserved communities are necessary.
Arguing about 140/90 mm Hg or 130/80 mm Hg is less important than acknowledging that we should be aggressive in screening for and treating hypertension. We should acknowledge that beyond a certain point any cardiovascular benefit comes at the cost of hypotension and side effects. That tipping point will be different for different groups, and probably at a higher set point in older patients.
Individualizing care isn’t difficult. We do it all the time. We just shouldn’t be letting people walk around with untreated hypertension. It’s not the 1900s anymore.
Dr. Labos is a cardiologist at Hôpital Notre-Dame, Montreal. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
Eosinophilic esophagitis: A year in review
At the AGA postgraduate course in May, we highlighted recent noteworthy randomized controlled trials (RCTs) of eosinophil-targeting biologic therapy, esophageal-optimized corticosteroid preparations, and dietary elimination in eosinophilic esophagitis (EoE).
Dupilumab, a monoclonal antibody that blocks interleukin-4 and IL-13 signaling, was tested in a phase 3 trial in adults and adolescents with EoE.1 In this double-blind, randomized, placebo-controlled trial, the efficacy of subcutaneous dupilumab 300 mg weekly or every other week was compared against placebo. Stringent histologic remission (≤6 eosinophils/high-power field) occurred in approximately 60% of those who received dupilumab (either dose) versus 5% of those on placebo. However, significant symptom improvement was seen only with 300 mg weekly dupilumab.
On the topical corticosteroid front, the results of two RCTs using a fluticasone orally disintegrating tablet (APT-1011) and budesonide oral suspension (BOS) were published. In the APT-1011 phase 2b trial, patients were randomized to receive 1.5 mg or 3 mg, daily or b.i.d., versus placebo for 12 weeks.2 High histologic response rates and improvement in dysphagia frequency, compared with placebo, were seen with all APT-1011 doses of at least 3 mg daily. However, adverse events (that is, candidiasis) were most frequent among those on 3 mg b.i.d. Thus, 3 mg daily APT-1011 was thought to offer the most favorable risk-benefit profile. In the BOS phase 3 trial, patients were randomized 2:1 to receive BOS 2 mg b.i.d. or placebo for 12 weeks.3 BOS was superior to placebo in histologic, symptomatic, and endoscopic outcomes.
Diet remains the only therapy targeting the cause of EoE and offers the potential for drug-free remission. In a randomized, open-label trial of a 1- versus 6-food elimination diet, adult patients were allocated 1:1 to 1FED (animal milk) or 6FED (animal milk, wheat, egg, soy, fish/shellfish, and peanuts/tree nuts) for 6 weeks.4 No significant difference in partial or stringent remission was found between the two groups. Step-up therapy resulted in an additional 43% histologic response in those who underwent 6FED after failing 1FED, and an 82% histologic response in those who received swallowed fluticasone 880 mcg b.i.d. after failing 6FED. Hence, eliminating animal milk alone in a step-up treatment approach is reasonable.
We have witnessed major progress in expanding EoE treatment options over the last year. Long-term efficacy and side-effect data, as well as studies directly comparing therapies, are needed to improve shared decision-making and to implement tailored care in EoE.
Dr. Chen is with the division of gastroenterology and hepatology, department of internal medicine at the University of Michigan, Ann Arbor. She disclosed consultancy work with Phathom Pharmaceuticals.
References
1. Dellon ES et al. N Engl J Med. 2022;387(25):2317-30.
2. Dellon ES et al. Clin Gastroenterol Hepatol. 2022;20(11):2485-94e15.
3. Hirano I et al. Clin Gastroenterol Hepatol. 2022;20(3):525-34e10.
4. Kliewer KL et al. Lancet Gastroenterol Hepatol. 2023;8(5):408-21.