Evidence is essential but not sufficient to move guidelines

For those hoping to steer an innovative health care strategy toward an eventual guideline recommendation, it is important to think beyond demonstration of efficacy and safety when designing randomized trials, according to an overview of how guideline committees currently function.

“In the old days, it was only the strength of the evidence. Now, in addition to the evidence, we have three other issues we look at to form the strength of a recommendation,” John M. Inadomi, MD, AGAF, head of the division of gastroenterology, University of Washington, Seattle, said at the 2018 AGA Tech Summit, sponsored by the AGA Center for GI Innovation and Technology.

These additional considerations include patient preferences, the balance of harms and benefits, and the resources consumed, according to Dr. Inadomi, who has participated in several guideline committees. All three issues must be weighed for any new strategy in the context of alternative management. By themselves, positive results from a randomized controlled trial (RCT) are not enough to guarantee a strong guideline recommendation.

“I think the big thing is that we are trying to move away from the just-the-evidence [approach],” Dr. Inadomi explained to an audience that included physician entrepreneurs and investors with an interest in how to establish a new diagnostic tool or treatment device as a standard of care.

There is no doubt that RCT data are critical for objectively establishing safety and efficacy, but there has been an evolutionary change. According to Dr. Inadomi, guideline committees are posing more pointed questions about the practical value of one strategy relative to others. They have also increased their scrutiny of the quality and consistency of the RCT data in relation to the specific indication being considered.

“The implication of a strong recommendation is that most people in the situation would want the recommended course of action and that only a small proportion would not,” Dr. Inadomi explained. On the basis of this criterion, an inconvenient, costly, or poorly accepted therapy may not receive a strong recommendation even if effective. Strong recommendations typically set a standard.

“For the health care provider, that means that most patients should receive that course of action,” Dr. Inadomi said. Conversely, “for a weak recommendation, it implies that the majority of people would want this, but many would not.”

Strong versus weak recommendations have an impact on health care policy, Dr. Inadomi added. Those measuring quality of care might, in some cases, evaluate the frequency with which patients receive guideline-based care that has been given a 1A rating, which identifies the strongest recommendation. Weak recommendations encourage a greater emphasis on shared decision making that recognizes alternative treatment strategies in the context of patient preferences and values.

This reorientation, which recognizes the limits of objective data on their own, is reflected in a less restrictive view of the sources of data used in guideline deliberations, according to Dr. Inadomi. “It was once thought that all RCTs are good and observational studies are bad,” he said, adding that this view has changed with greater appreciation of publication bias and of RCT limitations such as the enrollment of nonrepresentative patient populations. While RCT data are preferred, he contended that observational studies can sway guideline committees when the effect size is large and the evidence is consistent.

The move away from evidence-only guidelines is driven by a greater appreciation of value, Dr. Inadomi suggested. For entrepreneurs who hope to shepherd their devices or tools into a central position in clinical medicine, safety and efficacy are critical but may no longer be sufficient.

Dr. Inadomi has no disclosures relevant to this topic.

Physiology, not mechanics, explains benefit of bariatric procedures

The future of bariatric surgery depends not on better ways to block the absorption of ingested calories but on treatment combinations that promote weight control through healthy physiology, according to three experts participating in a panel on this topic at the 2018 AGA Tech Summit, sponsored by the AGA Center for GI Innovation and Technology.

“When we think about the mechanisms of surgery, the mechanical model is dead. There is no good supporting evidence for the mechanical model. The current model is all physiological, involving changes in signaling from the gut to the rest of the body,” asserted Lee Kaplan, MD, PhD, AGAF, director of the Weight Center at Massachusetts General Hospital, Boston.

Essentially all bariatric operations and bariatric endoscopic devices were designed to achieve weight loss by mechanically blocking or restricting the absorption of food. However, Dr. Kaplan said mechanics do not explain what is observed clinically.

The evidence that changes in physiologic function, rather than mechanics, explain the weight loss produced by bariatric interventions is extensive, according to Dr. Kaplan. Among his many examples, he noted that pregnant women gain weight normally after bariatric surgery.

“Now, if you cannot absorb food normally after bariatric surgery, how do you gain weight normally when pregnant?” Dr. Kaplan asked. The answer to this and other examples of a disconnect between a simple food-blocking mechanism and what is observed is that bariatric procedures favorably alter signals that control hunger, satiety, and metabolism.

The two other experts on the panel largely agreed. In discussing advances in small-bowel devices for the treatment of type 2 diabetes mellitus, Christopher Thompson, MD, AGAF, director of therapeutic endoscopy at Brigham and Women’s Hospital, Boston, also looked to physiologic effects of bariatric surgery. He placed particular emphasis on the foregut and hindgut hypotheses. These hypotheses are “not yet written in stone,” but they provide a conceptual basis for understanding metabolic changes observed after bariatric procedures.

“One way that gastric bypass might work is that it alters the incretins that drive insulin secretion and sensitivity,” Dr. Thompson said. The same principle has been proposed for a novel incisionless magnetic device developed by Dr. Thompson that is now in clinical trials. The device, which creates an anastomosis and a partial jejunal diversion, achieved a 40% excess weight loss and a significant reduction in hemoglobin A1c levels among patients with type 2 diabetes mellitus in an initial study. Dr. Thompson contended that this effect cannot be explained by a change in nutrient absorption.

A surgeon serving on the panel, Marina Kurian, MD, of New York University’s Langone Medical Center, New York, also referenced the evidence for physiologic effects when speaking about gastric bypass and sleeve gastrectomy. Although both procedures mechanically restrict food intake or absorption, she agreed that there are several reasons why this may not fully account for their benefits.

“Certainly with gastric bypass, we talk about foregut and hindgut theory in terms of incretin effect,” Dr. Kurian said. She also noted that even the procedures that produce the greatest restriction on food absorption are not typically effective as a single therapeutic approach. Rather, her major point was that no approach, whether surgical, endoscopic, or lifestyle based, is generally sufficient to achieve and maintain weight loss indefinitely. In her own practice, she has been moving to a “one-stop shopping” approach to coordinate multiple options.

“Those of us working in obesity are very aware of its chronicity and how one intervention is not enough,” Dr. Kurian said. She suggested that coordinated care among surgeons, gastroenterologists, dietitians, behavioral therapists, and others will provide the road forward even if the next set of surgical procedures or endoscopic devices are incrementally more effective than current options for weight loss.

One reason that a single intervention may not be enough is that obesity is not a single disease but the product of multiple pathologic processes, according to Dr. Kaplan. The varied response to current therapies supports this view. Citing a variety of examples, he showed that, although the most successful therapies produce large average weight reductions, some patients are exceptional responders while others lose little or no weight or even gain weight. He expressed doubt that there will be a single solution applicable to all patients.

“Patients who respond to one therapy may not respond to another and vice versa, and so the goal is to match each patient with the therapy that is most appropriate and protective for them,” Dr. Kaplan said.

GIs are uniquely positioned to lead a care team to help patients with obesity achieve a healthy weight. The POWER (Practice Guide on Obesity and Weight Management, Education and Resources) white paper provides physicians with a comprehensive, multidisciplinary process to guide and personalize innovative obesity care for safe and effective weight management.

Learn more at http://www.cghjournal.org/article/S1542-3565(16)309880/fulltext.

Eating Fish May Be Associated With a Reduced Risk of MS

Omega-3 fatty acids, combined with a specific genetic profile, may modulate MS risk.

LOS ANGELES—Eating fish at least once per week, or eating fish one to three times per month in addition to taking daily fish oil supplements, may be associated with a reduced risk of multiple sclerosis (MS), according to a preliminary study presented at the American Academy of Neurology’s 70th Annual Meeting. These findings suggest that the omega-3 fatty acids found in fish may be associated with a lower risk of developing MS.

“Consuming fish that contain omega-3 fatty acids has been shown to have a variety of health benefits, so we wanted to see if this simple lifestyle modification, regularly eating fish and taking fish oil supplements, could reduce the risk of MS,” said lead study author Annette Langer-Gould, MD, PhD, Regional Lead for Clinical and Translational Neuroscience for the Southern California Permanente Medical Group in Pasadena, and Clinical Assistant Professor at the Keck School of Medicine of the University of Southern California in Los Angeles.

For this study, researchers examined the diets of 1,153 people (average age 36) from the MS Sunshine Study, a multi-ethnic matched case-control study of incident MS or clinically isolated syndrome (CIS), recruited from Kaiser Permanente Southern California.

Researchers queried participants about how much fish they consumed regularly. Investigators also examined 13 single nucleotide polymorphisms (SNPs) in FADS1, FADS2, and ELOVL2, genes that regulate fatty acid biosynthesis.

High fish intake was defined as eating at least one serving of fish per week, or eating one to three servings per month in addition to taking daily fish oil supplements. Low intake was defined as less than one serving of fish per month and no fish oil supplements.

High fish intake was associated with a 45% reduced risk of MS or CIS, compared with eating fish less than once a month and taking no fish oil supplements. A total of 180 participants with MS had high fish intake, compared with 251 of the healthy controls.

In addition, two SNPs, rs174611 and rs174618, in FADS2 were independently associated with a lower risk of MS, even after accounting for high fish intake. This suggests that some people may have a genetic advantage when it comes to regulating fatty acid levels, the researchers noted.

While the study suggests that omega-3 fatty acids, and how they are processed by the body, may play an important role in reducing MS risk, Dr. Langer-Gould and colleagues emphasized that their findings show an association, not cause and effect. More research is needed to confirm the findings and to examine how omega-3 fatty acids may affect inflammation, metabolism, and nerve function.

The study was supported by the National Institute of Neurological Disorders and Stroke.

Hints of altered microRNA expression in women exposed to EDCs

Endocrine-disrupting chemicals (EDCs) are structurally similar to endogenous hormones and are therefore capable of mimicking these natural hormones and interfering with their biosynthesis, transport, binding action, and/or elimination. In animal studies and in human clinical observational and epidemiologic studies, various EDCs have consistently been associated with diabetes mellitus, obesity, hormone-sensitive cancers, neurodevelopmental disorders in children exposed prenatally, and adverse reproductive health outcomes.

In 2009, the Endocrine Society published a scientific statement in which it called EDCs a significant concern to human health (Endocr Rev. 2009;30[4]:293-342). Several years later, the American College of Obstetricians and Gynecologists and the American Society for Reproductive Medicine issued a Committee Opinion on Exposure to Toxic Environmental Agents, warning that patient exposure to EDCs and other toxic environmental agents can have a “profound and lasting effect” on reproductive health outcomes across the life course and calling the reduction of exposure a “critical area of intervention” for ob.gyns. and other reproductive health care professionals (Obstet Gynecol. 2013;122[4]:931-5).

More recently, the International Federation of Gynecology and Obstetrics similarly called for action both to prioritize research on toxic environmental agents and women’s reproductive health and to address the consequences of exposure (Int J Gynaecol Obstet. 2015 Oct 1;131[3]:219-25).

Despite strong calls by each of these organizations to not overlook EDCs in the clinical arena, as well as emerging evidence that EDCs may be a risk factor for gestational diabetes (GDM), EDC exposure may not be on the practicing ob.gyn.’s radar. Clinicians should know what these chemicals are and how to talk about them in preconception and prenatal visits. We should carefully consider their known – and potential – risks, and encourage our patients to identify and reduce exposure without being alarmist.

Low-dose effects

EDCs are used in the manufacture of pesticides, industrial chemicals, plastics and plasticizers, hand sanitizers, medical equipment, dental sealants, a variety of personal care products, cosmetics, and other common consumer and household products. They’re found, for example, in sunscreens, canned foods and beverages, food-packaging materials, baby bottles, flame-retardant furniture, stain-resistant carpet, and shoes. We are all ingesting and breathing them in to some degree.

Bisphenol A (BPA), one of the most extensively studied EDCs, is found in the thermal receipt paper routinely used by gas stations, supermarkets, and other stores. In a small study we conducted at Harvard, we found that urinary BPA concentrations increased after continual handling of receipts for 2 hours without gloves but did not increase significantly when gloves were used (JAMA. 2014 Feb 26;311[8]:859-60).

EDCs are among the 80,000-plus chemicals that have been introduced into the environment over the past 5 decades, largely without safety testing. Unlike in Europe, where chemicals are tested for safety before being brought to market, in the United States most chemicals enter the marketplace, as the Committee Opinion points out, “without comprehensive and standardized information regarding their reproductive and other long-term toxic effects.” (This opinion was reaffirmed in 2016.)

Informed consumers can then affect the market through their purchasing choices, but the removal of concerning chemicals from products takes a long time, and it’s not always immediately clear that replacement chemicals are safer. For instance, the BPA in “BPA-free” water bottles and canned foods has been replaced by bisphenol S (BPS), which has a very similar molecular structure to BPA. The potential adverse health effects of these replacement chemicals are now being examined in experimental and epidemiologic studies.

Through its National Health and Nutrition Examination Survey, the Centers for Disease Control and Prevention has reported detection rates of between 75% and 99% for different EDCs in urine samples collected from a representative sample of the U.S. population. In other human research, several EDCs have been shown to cross the placenta and have been measured in maternal blood and urine and in cord blood and amniotic fluid, as well as in placental tissue at birth.

It is interesting to note that BPA’s structure is similar to that of diethylstilbestrol (DES). BPA was first shown to have estrogenic activity in 1936 and was originally considered for use in pharmaceuticals to prevent miscarriages, spontaneous abortions, and premature labor, but it was put aside in favor of DES. (DES was eventually found to be carcinogenic and was taken off the market.) In the 1950s, the use of BPA was revived, though not in pharmaceuticals.

The Endocrine Society’s statement in 2009 was a wake-up call to the scientific community about the possible dangers of EDCs on health and disease. In the subsequent years, a huge body of scientific literature has been published elucidating potential associations with various adverse health outcomes and their underlying mechanisms. This has led us one step closer to informing public policy and identifying and regulating EDCs.

A better understanding of the mechanisms of action and dose-response patterns of EDCs has shown that EDCs can act at low doses, and in many cases a nonmonotonic dose-response relationship has been demonstrated. This is a paradigm shift for traditional toxicology, in which it is “the dose that makes the poison,” and some toxicologists have been critical of the claims of low-dose potency for EDCs.

A team of epidemiologists, toxicologists, and other scientists, including myself, critically analyzed in vitro, animal, and epidemiologic studies as part of a National Institute of Environmental Health Sciences working group on BPA to determine the strength of the evidence for low-dose effects (doses lower than those tested in traditional toxicology assessments) of BPA. We found that consistent, reproducible, and often adverse low-dose effects have been demonstrated for BPA in cell lines, primary cells and tissues, laboratory animals, and human populations. We also concluded that EDCs can pose the greatest threats when exposure occurs during early development, organogenesis, and during critical postnatal periods when tissues are differentiating (Endocr Disruptors [Austin, Tex.]. 2013 Sep;1:e25078-1-13).

A potential risk factor for GDM

Quite a lot of research has been done on EDCs and the risk of type 2 diabetes. A recent meta-analysis that included 41 cross-sectional and 8 prospective studies found that serum concentrations of dioxins, polychlorinated biphenyls, and chlorinated pesticides – and urine concentrations of BPA and phthalates – were significantly associated with type 2 diabetes risk. Comparing the highest and lowest concentration categories, the pooled relative risk was 1.45 for BPA and phthalates. EDC concentrations also were associated with indicators of impaired fasting glucose and insulin resistance (J Diabetes. 2016 Jul;8[4]:516-32).

Studies have shown that BPA and other environmental phenols can induce insulin resistance and metabolic dysfunction by acting on several endogenous pathways, including those that regulate energy and glucose metabolism. EDCs also have been found to be epigenetically toxic. In a landmark study in the agouti mouse model, maternal BPA exposure was shown to alter the animals’ epigenetic programming, leading to offspring that had yellow coats and were obese, rather than brown and small (Nutr Rev. 2008;66[Suppl 1]:S7-11). (The agouti mouse model has been used to study the impact of nutritional and environmental influences on the fetal epigenome; fur-color variation is correlated with epigenetic marks established early in development.)

Despite the mounting evidence for an association between BPA and type 2 diabetes, and despite the fact that the increased incidence of GDM in the past 20 years has mirrored the increasing use of EDCs, there has been a dearth of research examining the possible relationship between EDCs and GDM. The effects of BPA on GDM were identified as a knowledge gap by the National Institute of Environmental Health Sciences after a review of the literature from 2007 to 2013 (Environ Health Perspect. 2014 Aug;122[8]:775-86).

To understand the association between EDCs and GDM and the underlying mechanistic pathway, we are conducting research that builds on a growing body of evidence suggesting that environmental toxins are involved in the control of microRNA (miRNA) expression in trophoblast cells.

MiRNAs are short, single-stranded, noncoding RNAs involved in the post-transcriptional regulation of gene expression; they can be packaged, along with other signaling molecules, inside placental extracellular vesicles called exosomes. These exosomes appear to be shed from the placenta into the maternal circulation as early as 6-7 weeks into pregnancy. Once the exosomes are released into the maternal circulation, research has shown that they can target and reprogram other cells via the transfer of noncoding miRNAs, thereby changing gene expression in these cells.

Such an exosome-mediated signaling pathway gives us the opportunity to isolate exosomes, sequence their miRNAs, and examine whether women who are exposed to higher levels of EDCs (as indicated by urinary concentrations) have a particular miRNA signature that correlates with GDM. In other words, we’re working to determine whether particular EDCs and exposure levels affect placental miRNA profiles, and whether these profiles are predictive of GDM.

Thus far, in a pilot prospective cohort study of pregnant women, we are seeing hints of altered miRNA expression in relation to GDM. We have selected study participants who are at high risk of developing GDM (for example, prepregnancy body mass index greater than 30 kg/m2, or a previous pregnancy with GDM or macrosomia) because we suspect that, in many women, EDCs are a tipping point for the development of GDM rather than a sole causative factor. In addition to understanding the impact of EDCs on GDM, it is our hope that miRNAs in maternal circulation will serve as a noninvasive biomarker for early detection of GDM development or susceptibility.

Dr. Ehrlich is an assistant professor of pediatrics and environmental health at Cincinnati Children’s Hospital Medical Center.

Publications
Topics
Sections

 

Endocrine-disrupting chemicals (EDCs) are structurally similar to endogenous hormones and are therefore capable of mimicking these natural hormones, interfering with their biosynthesis, transport, binding action, and/or elimination. In animal studies and human clinical observational and epidemiologic studies of various EDCs, these chemicals have consistently been associated with diabetes mellitus, obesity, hormone-sensitive cancers, neurodevelopmental disorders in children exposed prenatally, and reproductive health.

In 2009, the Endocrine Society published a scientific statement in which it called EDCs a significant concern to human health (Endocr Rev. 2009;30[4]:293-342). Several years later, the American College of Obstetricians and Gynecologists and the American Society for Reproductive Medicine issued a Committee Opinion on Exposure to Toxic Environmental Agents, warning that patient exposure to EDCs and other toxic environmental agents can have a “profound and lasting effect” on reproductive health outcomes across the life course and calling the reduction of exposure a “critical area of intervention” for ob.gyns. and other reproductive health care professionals (Obstet Gynecol. 2013;122[4]:931-5).

University of Cincinnati
More recently, the International Federation of Gynecology and Obstetrics similarly called for action to both prioritize research on women’s health and toxic reproductive agents and to address the consequences of exposure (Int J Gynaecol Obstet. 2015 Oct 1;131[3]:219-25).

Despite strong calls by each of these organizations to not overlook EDCs in the clinical arena, as well as emerging evidence that EDCs may be a risk factor for gestational diabetes (GDM), EDC exposure may not be on the practicing ob.gyn.’s radar. Clinicians should know what these chemicals are and how to talk about them in preconception and prenatal visits. We should carefully consider their known – and potential – risks, and encourage our patients to identify and reduce exposure without being alarmist.
 

Low-dose effects

EDCs are used in the manufacture of pesticides, industrial chemicals, plastics and plasticizers, hand sanitizers, medical equipment, dental sealants, a variety of personal care products, cosmetics, and other common consumer and household products. They’re found, for example, in sunscreens, canned foods and beverages, food-packaging materials, baby bottles, flame-retardant furniture, stain-resistant carpet, and shoes. We are all ingesting and breathing them in to some degree.

Bisphenol A (BPA), one of the most extensively studied EDCs, is found in the thermal receipt paper routinely used by gas stations, supermarkets, and other stores. In a small study we conducted at Harvard, we found that urinary BPA concentrations increased after continual handling of receipts for 2 hours without gloves but did not increase significantly when gloves were used (JAMA. 2014 Feb 26;311[8]:859-60).

 

 


EDCs are among the 80,000-plus chemicals that have been introduced into the environment in the past 5 decades with no testing for their safety. Unlike in Europe, where chemicals are tested for safety before being brought to the market, in the United States, most chemicals enter the marketplace, as the Committee Opinion points out, “without comprehensive and standardized information regarding their reproductive and other long-term toxic effects.” (This opinion was reaffirmed in 2016.)

Informed consumers can then affect the market through their purchasing choices, but the removal of concerning chemicals from products takes a long time, and it’s not always immediately clear that replacement chemicals are safer. For instance, the BPA in “BPA-free” water bottles and canned foods has been replaced by bisphenol S (BPS), which has a very similar molecular structure to BPA. The potential adverse health effects of these replacement chemicals are now being examined in experimental and epidemiologic studies.



Through its National Health and Nutrition Examination Survey, the Centers for Disease Control and Prevention has reported detection rates of between 75% and 99% for different EDCs in urine samples collected from a representative sample of the U.S. population. In other human research, several EDCs have been shown to cross the placenta and have been measured in maternal blood and urine and in cord blood and amniotic fluid, as well as in placental tissue at birth.

It is interesting to note that BPA’s structure is similar to that of diethylstilbestrol (DES). BPA was first shown to have estrogenic activity in 1936 and was originally considered for use in pharmaceuticals to prevent miscarriages, spontaneous abortions, and premature labor but was put aside in favor of DES. (DES was eventually found to be carcinogenic and was taken off the market.) In the 1950s, the use of BPA was resuscitated though not in pharmaceuticals.

 

 


Endocrine-disrupting chemicals (EDCs) are structurally similar to endogenous hormones and are therefore capable of mimicking these natural hormones, interfering with their biosynthesis, transport, binding action, and/or elimination. In animal studies and in human clinical, observational, and epidemiologic studies, various EDCs have consistently been associated with diabetes mellitus, obesity, hormone-sensitive cancers, neurodevelopmental disorders in children exposed prenatally, and adverse reproductive health outcomes.

In 2009, the Endocrine Society published a scientific statement in which it called EDCs a significant concern to human health (Endocr Rev. 2009;30[4]:293-342). Several years later, the American College of Obstetricians and Gynecologists and the American Society for Reproductive Medicine issued a Committee Opinion on Exposure to Toxic Environmental Agents, warning that patient exposure to EDCs and other toxic environmental agents can have a “profound and lasting effect” on reproductive health outcomes across the life course and calling the reduction of exposure a “critical area of intervention” for ob.gyns. and other reproductive health care professionals (Obstet Gynecol. 2013;122[4]:931-5).

More recently, the International Federation of Gynecology and Obstetrics similarly called for action both to prioritize research on women’s health and toxic environmental agents and to address the consequences of exposure (Int J Gynaecol Obstet. 2015 Oct 1;131[3]:219-25).

Despite strong calls from each of these organizations not to overlook EDCs in the clinical arena, and despite emerging evidence that EDCs may be a risk factor for gestational diabetes mellitus (GDM), EDC exposure may not be on the practicing ob.gyn.’s radar. Clinicians should know what these chemicals are and how to talk about them at preconception and prenatal visits. We should carefully consider their known – and potential – risks, and encourage our patients to identify and reduce exposure without being alarmist.

Low-dose effects

EDCs are used in the manufacture of pesticides, industrial chemicals, plastics and plasticizers, hand sanitizers, medical equipment, dental sealants, a variety of personal care products, cosmetics, and other common consumer and household products. They’re found, for example, in sunscreens, canned foods and beverages, food-packaging materials, baby bottles, flame-retardant furniture, stain-resistant carpet, and shoes. We are all ingesting and breathing them in to some degree.

Bisphenol A (BPA), one of the most extensively studied EDCs, is found in the thermal receipt paper routinely used by gas stations, supermarkets, and other stores. In a small study we conducted at Harvard, we found that urinary BPA concentrations increased after continual handling of receipts for 2 hours without gloves but did not increase significantly when gloves were used (JAMA. 2014 Feb 26;311[8]:859-60).

EDCs are among the 80,000-plus chemicals that have been introduced into the environment in the past 5 decades with no testing for their safety. Unlike in Europe, where chemicals are tested for safety before being brought to the market, in the United States, most chemicals enter the marketplace, as the Committee Opinion points out, “without comprehensive and standardized information regarding their reproductive and other long-term toxic effects.” (This opinion was reaffirmed in 2016.)

Informed consumers can then affect the market through their purchasing choices, but the removal of concerning chemicals from products takes a long time, and it’s not always immediately clear that replacement chemicals are safer. For instance, the BPA in “BPA-free” water bottles and canned foods has been replaced by bisphenol S (BPS), which has a very similar molecular structure to BPA. The potential adverse health effects of these replacement chemicals are now being examined in experimental and epidemiologic studies.



Through its National Health and Nutrition Examination Survey, the Centers for Disease Control and Prevention has reported detection rates of between 75% and 99% for different EDCs in urine samples collected from a representative sample of the U.S. population. In other human research, several EDCs have been shown to cross the placenta and have been measured in maternal blood and urine and in cord blood and amniotic fluid, as well as in placental tissue at birth.

It is interesting to note that BPA’s structure is similar to that of diethylstilbestrol (DES). BPA was first shown to have estrogenic activity in 1936 and was originally considered for use in pharmaceuticals to prevent miscarriages, spontaneous abortions, and premature labor but was put aside in favor of DES. (DES was eventually found to be carcinogenic and was taken off the market.) In the 1950s, the use of BPA was resuscitated though not in pharmaceuticals.

The Endocrine Society’s statement in 2009 was a wake-up call to the scientific community about the potential effects of EDCs on health and disease. In the years since, a huge body of scientific literature has been published elucidating potential associations with various adverse health outcomes and their underlying mechanisms. This has moved us one step closer to informing public policy and to identifying and regulating EDCs.

A better understanding of the mechanisms of action and dose-response patterns of EDCs has shown that EDCs can act at low doses, and in many cases a nonmonotonic dose-response relationship has been demonstrated. This is a paradigm shift for traditional toxicology, in which “the dose makes the poison,” and some toxicologists have been critical of the claims of low-dose potency for EDCs.
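
For readers less familiar with the toxicology terminology, the sketch below contrasts the two dose-response shapes using entirely hypothetical functions and doses; it illustrates the concept only and is not a model of any particular EDC.

```python
import math

# Hypothetical illustration only: neither function is a fitted model of any real chemical.
def monotonic_response(dose):
    """Classical assumption: the response rises steadily as the dose increases."""
    return dose / (dose + 10.0)            # simple saturating curve

def nonmonotonic_response(dose):
    """Inverted U shape: low doses can produce larger responses than higher ones."""
    return dose * math.exp(-dose / 5.0)    # rises, peaks near dose 5, then declines

for dose in [0.1, 1, 5, 25, 100]:          # arbitrary dose units
    print(f"dose={dose:>5}: monotonic={monotonic_response(dose):.3f}, "
          f"nonmonotonic={nonmonotonic_response(dose):.3f}")
```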

A team of epidemiologists, toxicologists, and other scientists, including myself, critically analyzed in vitro, animal, and epidemiologic studies as part of a National Institute of Environmental Health Sciences working group on BPA to determine the strength of the evidence for low-dose effects (doses lower than those tested in traditional toxicology assessments) of BPA. We found that consistent, reproducible, and often adverse low-dose effects have been demonstrated for BPA in cell lines, primary cells and tissues, laboratory animals, and human populations. We also concluded that EDCs can pose the greatest threats when exposure occurs during early development, organogenesis, and during critical postnatal periods when tissues are differentiating (Endocr Disruptors [Austin, Tex.]. 2013 Sep;1:e25078-1-13).

A potential risk factor for GDM

Quite a lot of research has been done on EDCs and the risk of type 2 diabetes. A recent meta-analysis that included 41 cross-sectional and 8 prospective studies found that serum concentrations of dioxins, polychlorinated biphenyls, and chlorinated pesticides – and urine concentrations of BPA and phthalates – were significantly associated with type 2 diabetes risk. Comparing the highest and lowest concentration categories, the pooled relative risk was 1.45 for BPA and phthalates. EDC concentrations also were associated with indicators of impaired fasting glucose and insulin resistance (J Diabetes. 2016 Jul;8[4]:516-32).
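
As a rough illustration of what a relative risk of 1.45 means in practice – using invented counts, not data from the meta-analysis – the arithmetic looks like this:

```python
# Invented counts for illustration only; not data from the cited meta-analysis.
cases_high, total_high = 29, 200   # type 2 diabetes cases in the highest exposure category
cases_low, total_low = 20, 200     # cases in the lowest exposure category

risk_high = cases_high / total_high      # 0.145
risk_low = cases_low / total_low         # 0.100
relative_risk = risk_high / risk_low     # 1.45: 45% higher risk in the highest category

print(f"Relative risk = {relative_risk:.2f}")
```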

Cincinnati Children's Hospital Medical Center
Dr. Shelley R. Ehrlich
Studies have shown that BPA and other environmental phenols can induce insulin resistance and metabolic dysfunction by acting on several endogenous pathways, including those that regulate energy and glucose metabolism. EDCs also have been found to be epigenetically toxic. In a landmark study in the agouti mouse model, maternal BPA exposure was shown to alter the animals’ epigenetic programming, leading to offspring that had yellow coats and were obese, rather than brown and small (Nutr Rev. 2008;66[Suppl 1]:S7-11). (The agouti mouse model has been used to study the impact of nutritional and environmental influences on the fetal epigenome; fur-color variation correlates with epigenetic marks established early in development.)

Despite the mounting evidence for an association between BPA and type 2 diabetes, and despite the fact that the increased incidence of GDM in the past 20 years has mirrored the increasing use of EDCs, there has been a dearth of research examining the possible relationship between EDCs and GDM. The effects of BPA on GDM were identified as a knowledge gap by the National Institute of Environmental Health Sciences after a review of the literature from 2007 to 2013 (Environ Health Perspect. 2014 Aug;122[8]:775-86).

To understand the association between EDCs and GDM and the underlying mechanistic pathway, we are conducting research that builds on a growing body of evidence suggesting that environmental toxins are involved in the control of microRNA (miRNA) expression in trophoblast cells.

MiRNAs – short, single-stranded, noncoding RNAs involved in posttranscriptional regulation of gene expression – can be packaged along with other signaling molecules inside placental extracellular vesicles called exosomes. These exosomes appear to be shed from the placenta into the maternal circulation as early as 6-7 weeks into pregnancy. Once the exosomes are released into the maternal circulation, research has shown, they can target and reprogram other cells via the transfer of noncoding miRNAs, thereby changing gene expression in these cells.

Such an exosome-mediated signaling pathway gives us the opportunity to isolate exosomes, sequence the miRNAs, and examine whether women who are exposed to higher levels of EDCs (as indicated by urinary concentrations) have a particular miRNA signature that correlates with GDM. In other words, we’re working to determine whether particular EDCs and exposure levels affect placental miRNA profiles, and whether these profiles are predictive of GDM.
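
In schematic terms, the analytic idea can be sketched as follows; every number and the “miR-X” label below are invented for illustration, and the actual study relies on sequencing data and formal statistical modeling rather than this toy calculation.

```python
# Toy sketch of the analytic idea; all values and the "miR-X" label are hypothetical.
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical per-woman data: urinary BPA (ng/mL), expression of one placental
# exosomal miRNA (normalized counts), and whether GDM was later diagnosed (1 = yes).
bpa = [0.8, 1.2, 1.9, 2.5, 3.1, 4.0, 4.6, 5.3]
mir_x = [10, 11, 13, 15, 18, 21, 22, 25]
gdm = [0, 0, 0, 0, 1, 0, 1, 1]

# 1) Does the candidate miRNA track EDC exposure?
exposure_corr = statistics.correlation(bpa, mir_x)

# 2) Is its expression higher in women who went on to develop GDM?
gdm_expr = [e for e, g in zip(mir_x, gdm) if g == 1]
no_gdm_expr = [e for e, g in zip(mir_x, gdm) if g == 0]

print(f"BPA vs. miR-X correlation: {exposure_corr:.2f}")
print(f"Mean miR-X with GDM: {statistics.mean(gdm_expr):.1f}; "
      f"without GDM: {statistics.mean(no_gdm_expr):.1f}")
```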

Thus far, in a pilot prospective cohort study of pregnant women, we are seeing hints of altered miRNA expression in relation to GDM. We have selected study participants who are at high risk of developing GDM (for example, prepregnancy body mass index greater than 30, past pregnancy with GDM, or macrosomia) because we suspect that, in many women, EDCs are a tipping point for the development of GDM rather than a sole causative factor. In addition to understanding the impact of EDCs on GDM, it is our hope that miRNAs in maternal circulation will serve as a noninvasive biomarker for early detection of GDM development or susceptibility.

Dr. Ehrlich is an assistant professor of pediatrics and environmental health at Cincinnati Children’s Hospital Medical Center.


Studying the gestational diabetes risk associated with endocrine-disrupting chemicals


Pregnancy presents a unique opportunity for ob.gyns. to counsel their patients on the benefits of adopting healthy lifestyle habits. During pregnancy, women see a practitioner on a regular basis, and expectant mothers are highly motivated to take care of themselves for the sake of their developing babies. Patients can be much more receptive to recommendations from their health care teams during pregnancy than they might be outside of pregnancy. Frequent biometric assessments allow ob.gyns. to monitor patients’ progress and let them know, in a supportive manner, where they might be “falling short” of their health goals.

Dr. E. Albert Reece
Although ob.gyns. can influence a woman’s diet, exercise, and even tobacco use during pregnancy, one influence on pregnancy outcomes we cannot control is her exposure to environmental factors such as pollution, pathogenic microbes, and the chemicals that are part and parcel of modern life. For example, the 2016 Zika virus pandemic brought to the fore how vulnerable patients – both mothers and babies – are to the external conditions surrounding their homes. However, not every harmful entity in our environment can be contained by vigilantly eliminating mosquito-conducive conditions or blanketing affected neighborhoods with insecticides.

There are a number of chemicals with which we come in contact every day, sometimes multiple times in a day, which may deeply affect our health. This month’s Master Class highlights one such group of compounds, endocrine-disrupting chemicals, the most widely known of which is bisphenol A (BPA).



Several years ago, our guest author, Dr. Shelley Ehrlich of the University of Cincinnati, spoke at a diabetes in pregnancy meeting about her research on BPA and its potential association with the development of gestational diabetes mellitus (GDM). As a perinatologist who worked for many years with patients who had diabetes in pregnancy, I was particularly struck by her preliminary findings, which indicated that BPA might alter gene expression, thereby leading to pregnancy-related disorders. At the time, Dr. Ehrlich’s research was still in the very early stages, but her results offered a new way of answering the age-old question of why some women, including those without other overt risk factors, develop GDM.

Therefore, I’m delighted that Dr. Ehrlich agreed to author this month’s Master Class to provide an overview of where the last few years of research have taken her.

Dr. Reece, who specializes in maternal-fetal medicine, is vice president for medical affairs at the University of Maryland, Baltimore, as well as the John Z. and Akiko K. Bowers Distinguished Professor and dean of the school of medicine. Dr. Reece said he had no relevant financial disclosures. He is the medical editor of this column. Contact him at [email protected].


CVD risk high in individuals who once had metabolically healthy obesity


Many individuals with metabolically healthy obesity (MHO) will progress to metabolic syndrome over time, putting them at increased risk of cardiovascular disease (CVD), an analysis of a population-based longitudinal cohort study suggests.

Nearly half of the individuals with MHO developed metabolic syndrome over time, according to the analysis of 6,809 participants followed since the year 2000 in the Multi-Ethnic Study of Atherosclerosis.

Those who developed metabolic syndrome had an increased risk of CVD, compared with those who did not, according to results published in the Journal of the American College of Cardiology.

The results provide new evidence that MHO alone is not a stable or reliable characterization of lower CVD risk, according to Morgana Mongraw-Chaffin, PhD, of the department of epidemiology and prevention at Wake Forest University, Winston-Salem, N.C., and her coauthors.

“Instead, MHO signals an opportunity for weight reduction, and prevention and management of existing metabolic syndrome components should be prioritized,” Dr. Mongraw-Chaffin and her colleagues wrote.

Individuals with MHO, defined in this study as a body mass index of 30 kg/m2 or greater without metabolic syndrome, have a relatively favorable metabolic profile. However, their precise level of CVD risk remains contentious, the investigators noted.

“Although the accumulating evidence is leaning toward the consensus that MHO is not a low-risk state compared with metabolically healthy normal weight, many questions remain about the risk stratification for this group and what causes the heterogeneity seen in the literature,” they wrote.

In this study, 501 out of 1,051 individuals with MHO at baseline (48%) developed metabolic syndrome over a median follow-up of 12.2 years. Moreover, they then had increased odds of CVD (odds ratio, 1.60; 95% confidence interval, 1.14-2.25), compared with individuals who had stable MHO or normal weight.
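
For readers who want to see how an odds ratio such as 1.60 arises, here is a minimal sketch of the arithmetic from a 2 × 2 table with invented counts (these are not the actual MESA numbers):

```python
# Invented 2x2 counts for illustration; these are not the actual MESA results.
cvd_progressed, no_cvd_progressed = 40, 160   # MHO that progressed to metabolic syndrome
cvd_reference, no_cvd_reference = 27, 173     # stable MHO or normal weight (reference group)

odds_progressed = cvd_progressed / no_cvd_progressed   # 0.250
odds_reference = cvd_reference / no_cvd_reference      # ~0.156
odds_ratio = odds_progressed / odds_reference          # ~1.60: 60% higher odds of CVD

print(f"Odds ratio = {odds_ratio:.2f}")
```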

Duration of metabolic syndrome was linearly associated with CVD risk, with an odds ratio of 0.41 for those with metabolic syndrome at one out of five study visits, 2.19 for metabolic syndrome at two or three visits, and 2.50 for metabolic syndrome at four or five visits, the researchers said.

The results of this study may explain why some previous meta-analyses found individuals with MHO had increased risk, but only with longer duration of follow-up.

“Both transition to metabolic syndrome and longer duration of metabolic syndrome were associated with CVD, indicating that those with MHO may experience a lag in risk while they progress to metabolic syndrome and develop the resultant cardiometabolic risk,” Dr. Mongraw-Chaffin and her coauthors wrote.

The results mean that MHO represents an opportunity for primary prevention of CVD, they added.

“Prevention of incident metabolic syndrome and resulting CVD at the population level will necessitate the prevention of obesity,” they explained in a discussion of the results.

Dr. Mongraw-Chaffin and her associates reported that they had no relevant relationships to disclose.

SOURCE: Mongraw-Chaffin M et al. J Am Coll Cardiol 2018 May 1;71(17):1857-65.

Metabolically healthy obesity not so healthy after all

While obesity has inherent adverse effects on cardiometabolic parameters and cardiovascular disease (CVD) risk factors, metabolically healthy obesity (MHO) has emerged as a categorization of obese individuals who may not be at increased CVD risk because of relatively normal levels of lipids, blood pressure, and glucose.

An increasing body of research, however, including the present study by Dr. Mongraw-Chaffin and colleagues, highlights “dangers and long-term outcomes” of the MHO phenotype, Dr. Prakash Deedwania and Dr. Carl J. Lavie wrote in an editorial.

The analysis of 6,809 individuals in the Multi-Ethnic Study of Atherosclerosis found that MHO is not a stable condition, as almost one-half of individuals developed metabolic syndrome over a median of 12.2 years of follow-up, they noted.

Moreover, CVD risk was indeed elevated in these individuals with “unstable” MHO.

“Clearly, therefore, prevention of obesity in the first place is most prudent,” Dr. Deedwania and Dr. Lavie said in their editorial. “Prevention of progressive weight gain over time among the overweight and mildly obese is also of high importance to prevent development of metabolic syndrome and subsequent risk of CVD.”

If individuals with MHO can be identified early, the authors said, there is an excellent opportunity for primary prevention through lifestyle changes, including weight loss and regular physical exercise that might prevent MHO from converting to metabolically unhealthy obesity.

“Such population-wide healthy interventions are the only hope of preventing the oncoming tsunami of metabolic syndrome, diabetes, and CVD,” the editorial authors concluded.

Prakash Deedwania, MD, is with the University of California at San Francisco School of Medicine Program at Fresno. Carl J. Lavie, MD, is with the John Ochsner Heart and Vascular Institute, New Orleans, and the University of Queensland in Brisbane, Australia. These comments are derived from their editorial in the Journal of the American College of Cardiology (2018 May 1;71[17]:1866-8). Both authors reported they had no relevant relationships to disclose.

Vitals

Key clinical point: Metabolically healthy obesity (MHO) was transient and was not a reliable indicator of future cardiovascular disease risk, prompting investigators to recommend weight loss and lifestyle management for any individual with obesity.

Major finding: Nearly half of patients with MHO developed metabolic syndrome over a median follow-up of 12.2 years. They had increased risk of cardiovascular disease, compared with individuals who had stable MHO or normal weight (odds ratio, 1.60; 95% CI, 1.14-2.25).

Study details: Analysis based on data for 6,809 participants in the Multi-Ethnic Study of Atherosclerosis (MESA), a six-center U.S. population-based longitudinal cohort study started in 2000.

Disclosures: The authors reported they had no relevant relationships to disclose.

Source: Mongraw-Chaffin M et al. J Am Coll Cardiol. 2018 May 1;71(17):1857-65.


Nitrofurantoin beats fosfomycin for uncomplicated UTI

Five days of nitrofurantoin was significantly more effective than a single large dose of fosfomycin at achieving both clinical and microbiological cure among women with uncomplicated lower urinary tract infections (UTIs), a randomized study has determined.

By 28 days, clinical resolution had occurred in 70% of those who took nitrofurantoin and 58% of those who took fosfomycin – a statistically significant 12-percentage-point difference, Angela Huttner, MD, said at the European Society of Clinical Microbiology and Infectious Diseases annual congress.

But the benefit was even more pronounced in women whose infections were caused by Escherichia coli, with a 28-point spread in clinical resolution (78% vs. 50%) and a 14-point spread in microbiological cure (72% vs. 58%), said Dr. Huttner of Geneva University, Switzerland.

The results were simultaneously published online in JAMA (2018 Apr 22. doi: 10.1001/jama.2018.3627).

Michele G. Sullivan/MDedge News
Dr. Angela Huttner
“This was very clearly a superiority trial,” said Dr. Huttner. “We were very surprised at the strength of the findings among patients with E. coli.”

Despite its success, nitrofurantoin did not live up to its purported 96% UTI cure rate – an established number based on study data from the 1950s-1970s.

Such efficacy was probably a false finding, she said. Studies of that era were much less rigorous than they are today, Dr. Huttner pointed out. The primary endpoint in those studies was typically defined not as complete resolution of symptoms – as it was in her study – but as resolution or improvement.

“Also, improvement was often defined microbiologically, often something like a decrease from 10^5 colony-forming units to 10^4, which is never something we would use today.”

The study was conducted in Geneva, Poland, and Israel. It randomized 512 women with an uncomplicated lower UTI either to 5 days of nitrofurantoin 100 mg three times daily or to a single 3-g dose of fosfomycin. The women returned for clinical exam and urine culture at 14 and 28 days after they completed their treatment.

The primary outcome was 28-day clinical response. Success was defined as complete resolution of symptoms, a characterization that Dr. Huttner and her colleagues chose carefully. Many UTI studies include “improvement” in the clinical picture as part of a successful response. Dr. Huttner disagreed with that. “Our patients don’t want a partial response. They don’t want just an improvement. They want complete resolution of their symptoms.”

Failure was defined as the need for additional antibiotics or a change in antibiotic treatment. There was also an “indeterminate” category, for the small minority of patients who still felt some mild symptoms but were without microbiological signs of infection.

The mean age of the women was 44 years. All had an uncomplicated UTI characterized by dysuria, urgency, frequency, or suprapubic tenderness; 73% had a positive baseline urine culture. E. coli was the most common infective organism (about 60%) followed by different Klebsiella species, Proteus, and Enterococci. A few women had mixed pathogen infections. Only six patients had infective pathogens that were resistant to either of the study drugs.

At the 28-day assessment, clinical cure had occurred in 70% of those taking nitrofurantoin and 58% of those taking fosfomycin – an absolute difference of 12 percentage points.
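
Setting aside the confidence intervals, that 12-point difference can be translated into an approximate number needed to treat; the short calculation below is a back-of-the-envelope sketch that uses only the cure proportions reported above.

```python
# Uses the reported cure proportions; the NNT here is a back-of-the-envelope figure.
cure_nitrofurantoin = 0.70
cure_fosfomycin = 0.58

absolute_difference = cure_nitrofurantoin - cure_fosfomycin   # 0.12 (12 percentage points)
number_needed_to_treat = 1 / absolute_difference              # ~8.3

print(f"Absolute difference: {absolute_difference:.2f}")
print(f"NNT: about {number_needed_to_treat:.0f} women treated with nitrofurantoin rather "
      "than fosfomycin for one additional clinical cure at 28 days")
```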

“The difference was obvious at 14 days,” Dr. Huttner noted. At that point, 75% of those taking nitrofurantoin and 66% of those taking fosfomycin reported resolution of their symptoms.

Pathology reflected the improving clinical picture: Microbiologic resolution occurred in 74% of the nitrofurantoin group and 63% of the fosfomycin group.

A post hoc analysis looked at results among the 214 women with confirmed E. coli infections.

The difference in clinical response was “even more pronounced” in these patients, Dr. Huttner said. Through day 28, clinical resolution occurred in 78% of those taking nitrofurantoin and 50% of those taking fosfomycin – a significant difference of 28 points.

Patients with E. coli infections were 4.48 times more likely to fail treatment if they received fosfomycin than if they received nitrofurantoin.

Adverse events were few and primarily gastrointestinal. The most common were mild to moderate nausea and diarrhea (less than 4% in each group).

Both of the antibiotics were popular from the 1950s on but gradually fell out of favor as more powerful therapies were developed. However, as antibiotic resistance became an increasing problem, infectious disease specialists began to support bringing nitrofurantoin and fosfomycin out of mothballs. In 2011, a panel of international experts convened by the Infectious Diseases Society of America (IDSA) and the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) recommended both medications as first-line therapy for women with acute uncomplicated cystitis.

The group recommended fosfomycin in a single 3-gram dose and a nitrofurantoin regimen of 100 mg twice daily for 5 days. The fosfomycin recommendation is clearly inadequate, Dr. Huttner said.

“Fosfomycin is not a bad drug. I just think it’s underdosed in this setting,” she said.

Dr. Huttner had no financial disclosures.

SOURCE: Huttner A et al. ECCMID 2018. Abstract O0479.

Vitals


Key clinical point: Nitrofurantoin was significantly more effective than was fosfomycin for a clinical and microbiological cure of uncomplicated UTI in women.

Major finding: By 28 days, clinical resolution had occurred in 70% of those who took nitrofurantoin and 58% of those who took fosfomycin.

Study details: A prospective, randomized trial of 512 women with uncomplicated lower UTI conducted in Geneva, Poland, and Israel.

Disclosures: Dr. Huttner had no financial disclosures.

Source: Huttner A et al. ECCMID 2018. Abstract O0479.


Fetal exposure to depression: How does ‘dose’ figure in?


The last two decades have seen an ever-growing number of reports on the risks of fetal exposure to medicines used to treat depression during pregnancy. These reports have described issues ranging from the estimated risk of congenital malformations following fetal exposure to various psychotropics, such as SSRIs or atypical antipsychotics, to adverse neonatal effects such as poor neonatal adaptation syndrome. More recent reports, derived primarily from large administrative databases, have focused on concerns regarding both the risk for later childhood psychopathology, such as autism or ADHD, and neurobehavioral sequelae, such as motor or speech delay, following fetal exposure to antidepressants.

When considering the potential risks of fetal exposure to antidepressants across the spectrum of relevant outcomes, it is important to keep in mind the risks of not receiving antidepressant treatment. The known risks of antidepressant use during pregnancy are well described, but the literature on the adverse effects of untreated depression during pregnancy has also grown substantially. For example, data accumulated over the last several years support a heightened risk for obstetrical and neonatal complications among women with untreated depression, while recent data are inconclusive regarding the effects of untreated maternal depression on gene expression in the CNS during gestation and the effects of depression during pregnancy on the development of brain structures in areas that modulate emotion and behavior.

monkeybusinessimages/Thinkstock
In my opinion, one of the greatest recent advances in perinatal psychiatry has been the increased appreciation of the effect that perinatal psychiatric illness has on critical obstetrical and neonatal outcomes, as well as risk for later child psychopathology. However, few studies, to date, have systematically examined whether duration of the exposure to perinatal psychiatric illness (or the severity of the illness) is a relevant concern.

The ability to factor the “dose and duration” of exposure to perinatal psychiatric illness into a model predicting risk for a number of obstetrical or neonatal outcomes allows for a more refined risk-benefit decision with respect to the use of antidepressants during pregnancy. For example, there may be a threshold of illness severity or duration above which treating depression during pregnancy becomes even more imperative than it is for women without such severe psychiatric histories.

Research along these lines has been published in Nursing Research, in a study that examined the effect of maternal mood on infant outcomes, looking specifically at stress, depression, and intimate partner violence, and considering not just the presence of these elements but their duration and intensity both before and during pregnancy (2015 Sep-Oct;64[5]:331-41).

To do this, researchers examined survey data from Utah’s Pregnancy Risk Assessment Monitoring System of 4,296 women who gave birth during 2009-2011. Stress, depression, and intimate partner violence, and the duration and severity of each, were determined by questionnaire. Those determinations were compared with the outcomes of gestational age, birth weight, neonatal ICU admission, and the symptoms and diagnosis of postpartum depression.

Results of the study included the following: Increased duration of depression was associated with a greater risk of neonatal ICU admission, particularly in women who were depressed both before and during their pregnancy (adjusted odds ratio, 2.48), compared with women who had no depression.

We’ve known for a long time that a history of depression predicts increased risk for postpartum depression. In this particular study, it was actually shown that not just a history of depression, but the duration of experienced depression influenced the risk for postpartum depression.

For example, compared with women with no depression, women who were depressed before but not during their pregnancy had an aOR of 7.67, women depressed during pregnancy but not before had an aOR of 17.65, and women depressed both before and during pregnancy had an aOR of 58.35 – an extraordinary stratification of risk, basically.
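
For readers less familiar with the metric, these figures are adjusted odds ratios from the study's models, comparing each exposure group with the no-depression reference group; the general form (the standard definition, not a formula reproduced from the paper) is:

\[ \mathrm{OR} = \frac{p_{\text{exposed}}/(1 - p_{\text{exposed}})}{p_{\text{reference}}/(1 - p_{\text{reference}})} \]

So an aOR of 58.35 means the odds of postpartum depression among women depressed both before and during pregnancy were estimated at roughly 58 times the odds in the no-depression group, after covariate adjustment. Because postpartum depression is not a rare outcome, an odds ratio of this size overstates the corresponding risk ratio and should not be read literally as “58 times as likely.”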

What these data begin to suggest is that there may be a continuum of risk when it comes to the effects of exposure to depression during pregnancy, factoring in the dose and duration of that exposure. If the risk of adverse outcomes increases with greater severity of perinatal psychiatric illness, then the mandate to treat depression during pregnancy, whether with pharmacologic or nonpharmacologic interventions (or, commonly, a combination of the two), becomes that much more imperative. Regardless of the interventions used, treating depression during pregnancy and keeping women well before, during, and after pregnancy is critical. Such a recommendation dovetails with the literature showing the intergenerational effects of untreated depression; maternal depression is one of the strongest predictors of later childhood psychopathology.

With current national trends moving toward mandated screening initiatives for postpartum depression, appreciating the extent to which depression before and during pregnancy drives risk for postpartum mood disorder broadens how we think about mitigating risk for puerperal mood disturbance. Specifically, mitigating the effects of postpartum depression on women, their children, and their families must include more effective management of depression both before and during pregnancy.



Dr. Lee S. Cohen

Dr. Cohen is the director of the Ammon-Pinizzotto Center for Women’s Mental Health at Massachusetts General Hospital in Boston, which provides information resources and conducts clinical care and research in reproductive mental health. He has been a consultant to manufacturers of psychiatric medications.


Adolescents, young adults endorse marijuana for IBD

Article Type
Changed
Fri, 01/18/2019 - 17:34

 

Many adolescents and young adults with inflammatory bowel disease (IBD) use marijuana and perceive little to no harm from regular use, according to study findings.

In a cross-sectional study of 99 patients with IBD aged 13-22 years, 32% of participants reported having ever used marijuana or endorsed use in the past 6 months. Additionally, 42% of patients perceived little to no risk of harm with regular use, reported Edward J. Hoffenberg, MD, of the departments of pediatrics and psychiatry at the University of Colorado, Aurora, and his associates.

Stockphoto4u/iStockphoto
Investigators used the Research Electronic Data Capture (REDCap) tool to collect self-reported data on appetite, pain, quality of life, depression, anxiety, and marijuana use in patients with IBD seen at Children’s Hospital Colorado between December 2015 and June 2017. Motivation for marijuana use was assessed via 35 yes/no questions. Serum tetrahydrocannabinol and cannabidiol levels were measured at enrollment, and patients also were asked about marijuana use in the past 6 months. Participants who had ever used marijuana and/or endorsed use in the past 6 months were considered “ever users,” wrote Dr. Hoffenberg and his colleagues in the study published in the Journal of Pediatrics.

Overall, 62 patients had a diagnosis of Crohn’s disease, 27 had ulcerative colitis, and 10 had indeterminate/unknown colitis. Patients in the ever-use group were older (mean, 17 years) than those in the never-use group (mean, 15.9 years). Serum cannabinoids were detected in 50% of patients in the ever-use group. “The detection of serum cannabinoids only in the ever-users is consistent with truthful reporting,” the researchers said.

Additionally, 80% of ever-users and 25% of never-users perceived low to no risk of harm with regular use. After adjustment for age, ever-users were 10.7 times more likely to perceive low to no risk of harm (odds ratio, 10.7; P less than .001), the authors reported.
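
As a rough consistency check (an illustrative calculation from the percentages reported above, not one presented by the authors), the unadjusted odds ratio implied by those proportions lands close to the adjusted estimate:

\[ \mathrm{OR}_{\text{crude}} = \frac{0.80/0.20}{0.25/0.75} = \frac{4.0}{0.33} \approx 12 \]

so the age adjustment shifts the estimate only modestly, from roughly 12 down to the reported 10.7.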

Weekly and daily marijuana use was reported by 52% and 31% of ever-users, respectively; 9% reported daily or almost daily use. Medical reasons for use were endorsed by 57%, and 53% reported physical pain relief as a reason. Nonmedical recreational or psychological reasons for use were reported by 87%. Problems with use were reported by 37% of users, including cravings or a strong desire to use (20%), needing to use more to achieve the same effect (17%), and using a larger amount for longer than intended (17%).

“There is a need for further understanding of the potential medical benefits of marijuana use in IBD,” Dr. Hoffenberg and his associates wrote. “Theoretically, a different study design, such as a randomized controlled trial of marijuana use or placebo, could better evaluate the safety and benefit of frequent marijuana use for induction or maintenance of remission.”

Limitations of the study include difficulty determining differences in disease activity between groups because of the large number of patients with inactive or mild disease, as well as the need to group patients with Crohn’s disease and ulcerative colitis together because of the small total number of participants who endorsed marijuana use.

The study was funded by the Colorado Department of Public Health and Environment. No conflicts of interest were reported.

SOURCE: Hoffenberg EJ et al. J Pediatr. 2018. doi: 10.1016/j.jpeds.2018.03.041.



FROM THE JOURNAL OF PEDIATRICS

Vitals

 

Key clinical point: Many adolescents and young adults with IBD use marijuana and perceive little to no harm from regular use.

Major finding: Of the participants in the study, 32% reported ever having used marijuana or endorsing use in the past 6 months, and 42% perceived little to no risk of harm with regular use.

Study details: A cross-sectional study of 99 IBD patients aged 13-22 years at Children’s Hospital Colorado.

Disclosures: The study was funded by the Colorado Department of Public Health and Environment. No conflicts of interest were reported.

Source: Hoffenberg EJ et al. J Pediatr. 2018. doi: 10.1016/j.jpeds.2018.03.041.


Early results favorable for combo TLR9 agonist + pembro in advanced melanoma

Article Type
Changed
Mon, 01/14/2019 - 10:21

 

– The intratumoral Toll-like receptor 9 (TLR9) agonist CMP-001, given in combination with pembrolizumab to patients with advanced melanoma, was well tolerated and produced durable systemic clinical responses, according to early results from an ongoing phase 1 trial.

Objective response rates on the weekly (n = 56) and every-3-weeks (n = 13) schedules were 23% (13%-36%) and 15% (2%-45%), respectively, reported Mohammed M. Milhem, MBBS, of the University of Iowa, Iowa City.
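
Those rates and intervals are consistent with exact binomial estimates built on small responder counts, on the order of 13 of 56 on the weekly schedule and 2 of 13 on the every-3-weeks schedule (an inference from the reported figures; the underlying counts are not stated here):

\[ 13/56 \approx 23\%\ (13\%\text{-}36\%), \qquad 2/13 \approx 15\%\ (2\%\text{-}45\%) \]

The width of the every-3-weeks interval mainly reflects the small denominator: with only 13 patients, two responses are compatible with true response rates ranging from a few percent to roughly 45%.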

For those dosed weekly at the low dose (less than 5 mg) and the high dose (5 mg or more), the ORR was 19% (n = 43; 95% confidence interval, 8%-33%) and 27% (n = 26; 95% CI, 12%-48%), respectively. Activity was demonstrated regardless of tumor burden, Dr. Milhem said at the annual meeting of the American Association for Cancer Research.

In this phase 1b study, which used a 3+3 dose escalation design followed by an expansion phase, the researchers enrolled patients with advanced melanoma who had not responded to, or had progressed on, prior anti-PD-1 therapy given as monotherapy or in combination. CMP-001 was injected intratumorally, in combination with pembrolizumab given intravenously per its label.

The study drug CMP-001 has two components: a 30-mer CpG-A DNA oligonucleotide and a nonvirulent virus-like particle (VLP). The CpG-A DNA is packaged within the VLP, which protects it from degradation and allows TLR9 receptor uptake. CpG-A DNA acts as a TLR9 agonist by binding to the receptor, thereby activating plasmacytoid dendritic cells (pDCs) within the tumor microenvironment. This activation results in secretion of large amounts of type 1 interferon and Th1 chemokines, changing the microenvironment from a “cold/desert-like,” immune-suppressed state to a “hot,” antitumor inflamed state, Dr. Milhem said.

“The T cells thus generated can mediate tumor rejection both in the injected and noninjected tumor,” he said. Two CMP-001 schedules were evaluated, weekly for 7 weeks or weekly for 2 weeks, followed thereafter by dosing every 3 weeks until discontinuation (due to progression, toxicity, investigator decision, or withdrawal of consent). Scans were done every 12 weeks, and tumor response was assessed by RECIST v1.1.

The CMP-001 dose escalation scheme ranged from 1 mg to 10 mg. The maximum tolerated dose was not reached, and 5 mg weekly plus pembrolizumab was used for the dose expansion phase; because the maximum tolerated dose was not reached, investigators had the option of increasing the dose to 10 mg. The key inclusion criterion was metastatic or unresectable melanoma. In the dose escalation phase, the best response to prior anti-PD-1-based therapy had to have been disease progression or stable disease; in the dose expansion phase, patients who had progressed on anti-PD-1-based therapy were eligible regardless of best response. There was no restriction on the number of prior lines of therapy.

A total of 69 subjects were treated, 44 in dose escalation and 25 in the ongoing expansion phase. Two subjects discontinued because of treatment-related adverse events. The remaining patients had a manageable toxicity profile consisting predominantly of fever, nausea/vomiting, hypotension, and rigors. Grade 3/4 treatment-related adverse events reported in more than one subject were hypotension (n = 9, 13%), anemia (n = 2, 3%), chills (n = 2, 3%), and hypertension (n = 2, 3%). Hypotension was manageable with fluid resuscitation and, in some patients, required stress-dose steroids. Most of these side effects occurred 1-4 hours after the CMP-001 injection.

Of the 18 responders, 1 progressed, 2 withdrew consent, and 13 remain on study, with 2 subjects maintaining their response through week 72. The median duration of response was not reached. Regression of noninjected tumors occurred in cutaneous, nodal, hepatic, and splenic metastases.

“CMP-001 plus pembrolizumab induced systemic antitumor activity, and not just local efficacy since both injected and noninjected target lesions changed from baseline per RECIST,” Dr. Milhem said. Responders showed not only a rapid reduction in target lesions from baseline but also durable tumor regression, as is typically seen with other immunotherapeutics.

Immunohistochemical analysis of tumor biopsies demonstrated increases in CD8 (greater than fivefold) and PD-L1 expression 5 weeks after therapy in a subset of patients with pre- and posttreatment biopsies. Transcriptional analysis by RNA-seq revealed induction of a T cell-inflamed gene signature, notably significant upregulation of TLR and IFN-responsive genes.

It would be interesting to further investigate how this combination therapy compares with other strategies in a similar clinical scenario, such as oncolytic viruses, other TLR ligands, or other means of APC activation, discussant Jedd Wolchok, MD, PhD, pointed out. Understanding resistance mechanisms at the individual patient level and selecting the optimal patients for this combination therapy remain challenges, he said.

Dr. Milhem had no financial relationships to disclose.

SOURCE: Milhem MM et al. AACR Annual Meeting. Abstract CT144.



REPORTING FROM THE AACR ANNUAL MEETING

Vitals

 

Key clinical point: The combination of CMP-001 and pembrolizumab demonstrated a manageable toxicity profile, with an objective response rate of 22%.

Major finding: Objective response rates on the weekly (n = 56) and every-3-weeks (n = 13) schedules were 23% (13%-36%) and 15% (2%-45%), respectively.

Study details: This phase 1b study comprised 69 patients (44 in escalation and 25 in expansion).

Disclosures: Dr. Milhem had no financial relationships to disclose.

Source: Milhem MM et al. AACR Annual Meeting. Abstract CT144.
