The medical management of early-stage endometrial cancer: When surgery isn’t possible, or desired

The standard management of early-stage endometrial cancer is surgery, with hysterectomy, salpingectomy with or without oophorectomy, and staging lymph node sampling. Surgery serves as both a therapeutic and a diagnostic intervention because the surgical pathology results are used to predict the likelihood of relapse and to guide adjuvant therapy decisions. In some cases, however, surgery is not feasible or not desired, particularly when fertility preservation is a goal. Fortunately, these patients can be offered nonsurgical options associated with favorable outcomes.

Endometrial cancer is associated with obesity, which promotes endometrial hyperplasia and cellular proliferation through heightened hormonal and growth factor signaling. Not only does obesity drive the development of endometrial cancer, it also complicates treatment of the disease. For example, staging surgery for endometrial cancer is less likely to be completed through a minimally invasive route as body mass index increases, primarily because of limitations in surgical exposure.1 In fact, obesity can prevent surgery from being offered through any route. Beyond body habitus, the determination of inoperability is also strongly influenced by the presence of coronary artery disease, hypertension, and diabetes.2 Because these comorbidities are more common among women who are overweight, obesity creates a perfect storm of causative and complicating factors that stand in the way of optimal treatment.

Dr. Emma C. Rossi

While surgeons may determine a patient’s candidacy for hysterectomy, patients themselves also drive this decision-making, particularly young patients who desire fertility preservation. Approximately 10% of patients with endometrial cancer are premenopausal, a proportion that is increasing over time. These women may have experienced infertility before their diagnosis yet still strongly wish to attempt conception, particularly if they have suffered from anovulatory menstrual cycles or polycystic ovary syndrome. Women with Lynch syndrome are also at higher risk of developing their cancer during the premenopausal years. It is therefore critical that gynecologic oncologists consider nonsurgical options for these women and understand their potential for success.

Certain criteria should be met for women undergoing nonsurgical management of endometrial cancer, particularly when it is chosen electively for fertility preservation. The diagnosis should be obtained with a curettage specimen (rather than a pipelle biopsy) to optimize the accuracy of tumor grading and to “debulk” the endometrial tissue. Pretreatment imaging is necessary to rule out distant metastatic disease. MRI is particularly helpful in approximating the depth of myometrial invasion and is recommended for patients desiring fertility preservation. Patients whose endometrial cancer invades deeply into the myometrium are poor candidates for fertility preservation and have a higher risk for metastatic disease, particularly to lymph nodes; more definitive treatment (surgery or, if the patient is inoperable, radiation, which covers the nodal basins) should be considered for these women.

Hormonal therapy has long been recognized as a highly effective systemic therapy for endometrial cancers, particularly those that are low grade and express estrogen and progesterone receptors. Progestins can be administered orally, in preparations such as megestrol acetate or medroxyprogesterone acetate, or “locally” with a levonorgestrel-releasing intrauterine device. Oral preparations are straightforward, typically low-cost agents, with a likelihood of success of 50%-75%. However, their systemic side effects, which include increased venous thromboembolism risk and appetite stimulation, are particularly problematic in this population. Therefore, many providers prefer to place a progestin-releasing intrauterine device to “bypass” these side effects, avoid issues with adherence to dosing, and provide some preventive endometrial coverage after resolution of the cancer. Recent trials have observed elimination of endometrial cancer on repeat sampling in 67%-76% of cases.3-5 This strategy may be more successful when paired with a GnRH agonist.4

When hormonal therapy is chosen for primary treatment of endometrial cancer, efficacy is typically monitored with repeat endometrial sampling, most commonly pipelle biopsies to avoid displacing an intrauterine device, although repeat D&C may be more effective in achieving a complete pathologic response. Most providers recommend resampling the endometrium at 3-month intervals until resolution of the malignancy has been documented, and thereafter if any new bleeding develops. For women who have demonstrated resolution of carcinoma on repeat sampling, data are lacking to guide decisions about resuming conception efforts, ongoing surveillance, and completion hysterectomy once childbearing is finished. If malignancy is still identified after 6 months of hormonal therapy, a more definitive treatment (such as surgery, if feasible, or radiation if not) should be considered. Continued hormonal therapy is also an option, as delayed responses remain common even 1 year after starting therapy.6 If hormonal therapy is prolonged for persistent disease, repeat MRI is recommended at 6 months to document the absence of progression.

Radiation, preferably with both intracavitary and external beam treatment, is the most definitive intervention for inoperable early-stage endometrial cancer. Unfortunately, fertility is not preserved with this approach. However, for patients with high-grade tumors, which are less likely to express hormone receptors or respond to hormonal therapies, it may be the only treatment option available. A typical course includes 5 weeks of external beam radiation to the whole pelvis, which also treats occult metastases not identified on imaging. Optimal therapy includes placement of intracavitary radiation implants, such as Heyman capsules, to concentrate the dose at the uterine fundus while minimizing toxicity to the adjacent bladder and bowel. While definitive radiation is considered inferior to a primary surgical effort, disease-specific survival exceeding 80% has been observed in patients treated this way.7

While surgery remains the standard intervention for women with early-stage endometrial cancer, hormonal therapy and radiation remain viable options, with high rates of success, for women who are not surgical candidates or who desire fertility preservation.

Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill.

References

1. Walker JL et al. J Clin Oncol. 2009;27(32):5331-6.

2. Ertel M et al. Ann Surg Oncol. 2021;28(13):8987-95.

3. Janda M et al. Gynecol Oncol. 2021;161(1):143-51.

4. Novikova OV et al. Gynecol Oncol. 2021;161(1):152-9.

5. Westin SN et al. Am J Obstet Gynecol. 2021;224(2):191.e1-15.

6. Cho A et al. Gynecol Oncol. 2021;160(2):413-17.

7. Dutta SW et al. Brachytherapy. 2017;16(3):526-33.

Man who received first modified pig heart transplant dies

David Bennett Sr, the 57-year-old patient with terminal heart disease who became the first person to receive a genetically modified pig heart, has died. He passed away March 8, according to a statement from the University of Maryland Medical Center (UMMC), Baltimore, where the transplant was performed.

Mr. Bennett received the transplant on January 7 and lived for 2 months following the surgery.   

UMMC did not provide an exact cause of death but said Mr. Bennett’s condition began deteriorating several days before he died.

When it became clear that he would not recover, he was given compassionate palliative care and was able to communicate with his family during his final hours.

“We are devastated by the loss of Mr. Bennett. He proved to be a brave and noble patient who fought all the way to the end. We extend our sincerest condolences to his family,” Bartley P. Griffith, MD, who performed the transplant, said in the statement.

“We are grateful to Mr. Bennett for his unique and historic role in helping to contribute to a vast array of knowledge to the field of xenotransplantation,” added Muhammad M. Mohiuddin, MD, director of the cardiac xenotransplantation program at University of Maryland School of Medicine.

Before receiving the genetically modified pig heart, Mr. Bennett had required mechanical circulatory support to stay alive but was rejected for standard heart transplantation at UMMC and other centers. He was ineligible for an implanted ventricular assist device due to ventricular arrhythmias.

Following surgery, the transplanted pig heart performed well for several weeks without any signs of rejection. The patient was able to spend time with his family and participate in physical therapy to help regain strength.

“This organ transplant demonstrated for the first time that a genetically modified animal heart can function like a human heart without immediate rejection by the body,” UMMC said in a statement issued 3 days after the surgery.

Thanks to Mr. Bennett, “we have gained invaluable insights learning that the genetically modified pig heart can function well within the human body while the immune system is adequately suppressed,” said Dr. Mohiuddin. “We remain optimistic and plan on continuing our work in future clinical trials.”

The patient’s son, David Bennett Jr, said the family is “profoundly grateful for the life-extending opportunity” provided to his father by the “stellar team” at the University of Maryland School of Medicine and the University of Maryland Medical Center.

“We were able to spend some precious weeks together while he recovered from the transplant surgery, weeks we would not have had without this miraculous effort,” he said.

“We also hope that what was learned from his surgery will benefit future patients and hopefully, one day, end the organ shortage that costs so many lives each year,” he added.

A version of this article first appeared on Medscape.com.

‘Baby-friendly’ steps help women meet prenatal breastfeeding goals

A first-ever study of the effect of evidence-based maternity care practices on meeting prenatal breastfeeding intentions among women from low-income U.S. households shows that the use of “baby-friendly steps” during birth hospitalization made it possible for almost half to breastfeed exclusively for 1 month.

Analyses of national data from a longitudinal study of 1,080 women enrolled in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) revealed that 47% were able to meet their prenatal intention to breastfeed without formula or other milk for at least 30 days.

The likelihood of meeting prenatal breastfeeding intentions was more than four times greater when babies received only breast milk (risk ratio, 4.4; 95% confidence interval, 3.4-5.7), the study showed. Breastfeeding within 1 hour of birth was also associated with a greater likelihood of breastfeeding success (RR, 1.3; 95% CI, 1.0-1.6).

The study, led by Heather C. Hamner, PhD, MS, MPH, of the National Center for Chronic Disease Prevention and Health Promotion, Atlanta, was reported online in Pediatrics.

“This study confirms the relationship between experiencing maternity care practices supportive of breastfeeding and meeting one’s breastfeeding intentions, and adds evidence specifically among low-income women, who are known to be at higher risk of not breastfeeding,” the study authors wrote.

Women from low-income households often face additional barriers to meeting their breastfeeding goals, including lack of access to professional lactation services, Dr. Hamner said in an interview. “We want physicians to know how important maternity care practices supportive of breastfeeding are to helping all women achieve their breastfeeding goals. Physicians can be champions for implementation of evidence-based maternity care practices in the hospitals and practices in which they work.”

Dr. Hamner emphasized that physicians need to discuss the importance of breastfeeding with patients and their families, brief them on what to expect in the maternity care setting, and ensure women are connected to lactation resources. The American Academy of Pediatrics is working to increase physician capacity to support breastfeeding through the Physician Engagement and Training Focused on Breastfeeding project.

For the study, Dr. Hamner and colleagues analyzed data from the longitudinal WIC Infant and Toddler Feeding Practices Study-2 (ITFPS-2), which assessed the impact of six steps from a 10-step maternity care protocol known as the Ten Steps to Successful Breastfeeding. These steps are part of the worldwide Baby-Friendly Hospital Initiative (BFHI), which has been shown to improve rates of breastfeeding initiation, duration, and exclusivity.

After adjusting for sociodemographic and other factors, the study authors estimated risk ratios for associations between each of six maternity care practices assessed in ITFPS-2 and the success of women who reported an intention to breastfeed exclusively for 1 month. The six steps included initiation of breastfeeding within 1 hour of birth (step 4), showing moms how to breastfeed and maintain lactation (step 5), giving no food or drink other than breast milk unless medically indicated (step 6), rooming-in (step 7), breastfeeding on demand (step 8), and giving no pacifiers (step 9).

The analyses showed that only steps 4 and 6 – initiating breastfeeding at birth and giving only breast milk – remained significantly associated with meeting breastfeeding intentions. The results also revealed a dose-response relationship between the number of baby-friendly steps experienced during birth hospitalization and the likelihood of meeting breastfeeding goals, a finding in keeping with previous studies. Among women who experienced all six steps, for example, 76% were breastfeeding exclusively at 1 month, compared with 16% of those who experienced zero to two steps.

Although the dose-response relationship did not appear to differ significantly by race or ethnicity, it was driven primarily by a hospital policy of providing infant formula or other supplementation, the study authors found. Notably, 44% of women reported that their infant had been fed something other than breast milk while in the hospital, and about 60% said they stopped breastfeeding earlier than intended.

“This finding reiterates the importance of limiting in-hospital formula or other supplementation of breastfed infants to only those with medical necessity,” Dr. Hamner and colleagues said.

Despite improvements in maternity care practices that promote breastfeeding, including an increase in the number of births occurring in U.S. hospitals with a baby-friendly designation, many women continue to experience significant barriers to breastfeeding, the investigators pointed out. Currently, there are 592 baby-friendly hospitals in the United States, representing 28.29% of annual births.

“I think more hospitals becoming baby friendly would really help,” Mary Franklin, DNP, CNM, assistant professor at Case Western Reserve University, Cleveland, said in an interview. More needs to be done to support women during birth hospitalization and after they return home, so they can continue to breastfeed for “longer than the initial 6 weeks,” added Dr. Franklin, who is also director of the nurse midwifery and women’s health NP program.

The AAP recommends exclusive breastfeeding for about 6 months followed by complementary food introduction and continued breastfeeding through 12 months or beyond.

Like Dr. Hamner, Dr. Franklin emphasized that physicians have an important role to play in the initiation, duration, and exclusivity of breastfeeding. This includes promoting enrichment of the pregnancy experience with prenatal education and increased support from health care providers and peers. At delivery, obstetricians can delay cord clamping to facilitate early breastfeeding. They can also support the elimination of the central nursery in hospitals so that mother and baby stay together from birth. In addition, prescriptions can be written for breast pumps, which are covered by Medicaid.

The study received no outside funding. Dr. Hamner and coauthors disclosed having no potential financial conflicts of interest. Dr. Franklin also disclosed having no financial conflicts of interest.

Cat ownership in childhood linked ‘conditionally’ to psychosis in adult males

Owning an outdoor cat as a child is associated with an increased risk of psychotic experiences in adulthood – but only in males, new research suggests.

Investigators found male children who owned cats that went outside had a small, but significantly increased, risk of psychotic experiences in adulthood, compared with their counterparts who had no cat during childhood or who had an indoor cat.

Dr. Vincent Paquin

The suspected culprit is not the cat itself but rather exposure to Toxoplasma gondii, a common parasite carried by rodents and sometimes found in cat feces. The study adds to a growing body of evidence showing that exposure to T. gondii may be a risk factor for schizophrenia and other psychotic disorders.

“These are small pieces of evidence but it’s interesting to consider that there might be combinations of risk factors at play,” lead author Vincent Paquin, MD, psychiatry resident at McGill University, Montreal, said in an interview.

“And even if the magnitude of the risk is small at the individual level,” he added, “cats and Toxoplasma gondii are so present in our society that if we add up all these small potential effects then it becomes a potential public health question.”

The study was published online Jan. 30, 2022, in the Journal of Psychiatric Research.
 

Inconsistent evidence

T. gondii infects about 30% of the human population and is usually transmitted by cats. Most infections are asymptomatic, but T. gondii can cause toxoplasmosis in humans, which has been linked to an increased risk of schizophrenia, suicide attempts, and, more recently, mild cognitive impairment.

Although some studies show an association between cat ownership and increased risk of mental illness, the research findings have been inconsistent.

“The evidence has been mixed about the association between cat ownership and psychosis expression, so our approach was to consider whether specific factors or combinations of factors could explain this mixed evidence,” Dr. Paquin said.

For the study, 2,206 individuals aged 18-40 years completed the Community Assessment of Psychic Experiences (CAPE-42) and a questionnaire gathering information on cat ownership at any time between birth and age 13 years and on whether the cats lived exclusively indoors (nonhunting) or were allowed outside (rodent hunting).

Participants were also asked about the number of residential moves between birth and age 15, date and place of birth, lifetime history of head trauma, and tobacco smoking history.

Rodent-hunting cat ownership was associated with higher risk of psychosis in male participants, compared with owning no cat or a nonhunting cat. When the investigators added head trauma and residential moves to rodent-hunting cat ownership, psychosis risk was elevated in both men and women.

Independent of cat ownership, younger age, moving more than three times as a child, a history of head trauma, and being a smoker were all associated with higher psychosis risk.

Dr. Suzanne King

The study wasn’t designed to explore potential biological mechanisms to explain the sex differences in psychosis risk seen among rodent-hunting cat owners, but “one possible explanation based on the animal model literature is that the neurobiological effects of parasitic exposure may be greater with male sex,” senior author Suzanne King, PhD, professor of psychiatry at McGill, said in an interview.

The new study is part of a larger, long-term project called EnviroGen, led by Dr. King, examining the environmental and genetic risk factors for schizophrenia.
 

Need for replication

Commenting on the findings, E. Fuller Torrey, MD, who was among the first researchers to identify a link between cat ownership, T. gondii infection, and schizophrenia, said the study is “an interesting addition to the studies of cat ownership in childhood as a risk factor for psychosis.”

Of the approximately 10 published studies on the topic, about half suggest a link between cat ownership and psychosis later in life, said Dr. Torrey, associate director for research at the Stanley Medical Research Institute in Rockville, Md.

“The Canadian study is interesting in that it is the first study that separates exposure to permanently indoor cats from cats that are allowed to go outdoors, and the results were positive only for outdoor cats,” Dr. Torrey said.

The study has limitations, Dr. Torrey added, including its retrospective design and the use of a self-report questionnaire to assess psychotic experiences in adulthood.

Also commenting on the findings, James Kirkbride, PhD, professor of psychiatric and social epidemiology, University College London, noted the same limitations.

Dr. Kirkbride is the lead author of a 2017 study of nearly 5,000 people born in 1991 or 1992 and followed until age 18 that showed no link between cat ownership and serious mental illness. In that study, there was no association between psychosis and cat ownership during pregnancy or at ages 4 or 10 years.

“Researchers have long been fascinated with the idea that cat ownership may affect mental health. This paper may have them chasing their own tail,” Dr. Kirkbride said.

“Evidence of any association is limited to certain subgroups without a strong theoretical basis for why this may be the case,” he added. “The retrospective and cross-sectional nature of the survey also raise the possibility that the results are impacted by differential recall bias, as well as the broader issues of chance and unobserved confounding.”

Dr. King noted that recall bias is a limitation the researchers highlighted in their study, but “considering the exposures are relatively objective and factual, we do not believe the potential for recall bias is substantial.”

“Nonetheless, we strongly believe that replication of our results in prospective, population-representative cohorts will be crucial to making firmer conclusions,” she added.

The study was funded by grants from the Quebec Health Research Fund. The study authors and Dr. Kirkbride disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

No new safety signals reported for pembrolizumab as kidney cancer treatment


A new analysis of KEYNOTE-564 demonstrates that pembrolizumab continues to show consistent and clinically meaningful improvement in disease-free survival for patients at high risk of a renal cell carcinoma recurrence.

The updated analysis was presented at the American Society of Clinical Oncology Genitourinary Cancers Symposium.

The study, which was led by Toni K. Choueiri, MD, director of the Lank Center for Genitourinary Oncology at Dana-Farber Cancer Institute, Boston, found no increase in high-grade, or other, adverse events with 30 months of follow-up.

Pembrolizumab (Keytruda, Merck) is approved by the Food and Drug Administration in combination with axitinib as a first-line treatment for patients with advanced renal cell carcinoma (RCC). The combination is also recommended by the European Society for Medical Oncology for advanced RCC.

KEYNOTE-564 is a phase 3, double-blind, multicenter trial of pembrolizumab versus placebo after surgery in participants with renal cell carcinoma. In previously presented interim KEYNOTE-564 results, the primary endpoint of disease-free survival for adjuvant pembrolizumab versus placebo was met in the intent-to-treat population (hazard ratio, 0.68; 95% confidence interval, 0.53-0.87; P = .001).

KEYNOTE-564 includes 994 patients (mean age, 60 years; 71% male) with confirmed clear cell RCC who were at intermediate to high risk, at high risk, or M1 with no evidence of disease (NED) after surgery. All patients had surgery 12 or fewer weeks prior to randomization.

At 30.1 months, disease-free survival in the intent-to-treat population was 78.3% with pembrolizumab versus 67.3% with placebo (HR, 0.63; 95% CI, 0.50-0.80; nominal P < .0001), a larger benefit than the 77.3% versus 68.1% difference observed at 24.1 months (hazard ratio, 0.68; 95% CI, 0.53-0.87; P = .001).

Analysis of disease-free survival by recurrence risk subgroups showed an increasing benefit with increasing risk. In the intermediate- to high-risk group, the 24-month rates were 81.1% and 72.0% for pembrolizumab and placebo, respectively (HR, 0.68; 95% CI, 0.52-0.89). In the high-risk group, the 24-month rates were 48.7% and 35.4%, respectively (HR, 0.60; 95% CI, 0.33-1.10). In the M1 NED group, the rates were 78.4% and 37.9% for pembrolizumab and placebo, respectively (HR, 0.28; 95% CI, 0.12-0.66).

During a discussion about the presentation, Daniel M. Geynisman, MD, of Fox Chase Cancer Center, Philadelphia, pointed to the respective absolute differences between pembrolizumab and placebo of 9% in the intermediate- to high-risk group, 13% in the high-risk group, and a “whopping difference of 41% in the M1 NED group,” but underscored that the M1 NED population represents only a small part of the trial population (5.8%). The same relationship was evident among sarcomatoid status subgroups, with 24-month absolute differences of 10.1% favoring pembrolizumab (HR, 0.63) with sarcomatoid features absent and 19.8% (HR, 0.54) when they were present.
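For readers who want to check the arithmetic, the short sketch below simply recomputes those absolute differences from the 24-month subgroup rates quoted above. It is illustrative only – it works from the percentages reported in this article, not from trial source data, and rounding explains why the M1 NED gap comes out at about 40.5 rather than exactly 41 percentage points.

```python
# Illustrative arithmetic only: absolute differences in 24-month
# disease-free survival, recomputed from the rates reported above.
subgroup_rates = {
    "intermediate-to-high risk": (81.1, 72.0),  # pembrolizumab, placebo
    "high risk": (48.7, 35.4),
    "M1 NED": (78.4, 37.9),
}

for name, (pembro, placebo) in subgroup_rates.items():
    diff = pembro - placebo  # absolute difference in percentage points
    print(f"{name}: {pembro}% vs {placebo}% -> {diff:.1f} percentage points")
```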

Overall survival estimates showed an increasing benefit for pembrolizumab versus placebo, from 96.6% versus 93.5% (HR, 0.54) at 24.1 months to 96.2% versus 93.8% (HR, 0.52) at 30.1 months, but the difference was not statistically significant because data collection was incomplete.

Between 24.1 and 30.1 months, rates of treatment-related adverse events remained unchanged at 79.1% for pembrolizumab and 53.4% for placebo, while treatment-related discontinuations in the pembrolizumab group were 17.6% and 18.6% at 24.1 and 30.1 months, respectively.

The study was funded by Merck Sharp & Dohme.


MRI biomarker to be tested in MS


Multiple sclerosis (MS) remains a challenging disease to diagnose, but recent advances in magnetic resonance imaging (MRI) have the potential to improve both the sensitivity and specificity of diagnosis. One method involves using MRI to detect the central vein sign (CVS). The CVS is a hypointense vessel at the center of a hyperintense focal lesion, and various retrospective analyses have found an association between a greater percentage of lesions showing the CVS and a diagnosis of MS.

“This is a very frequent finding in MS patients, but it’s a very infrequent finding in non-MS patients, specifically radiological mimics or clinical mimics that are typically confused with MS at the time of clinical diagnosis, and could lead to misdiagnosis. The idea here of a central vein sign is to use it diagnostically as early as possible in the evaluation of the MS, and use it as complementary to the existing McDonald criteria to improve sensitivity and specificity,” Pascal Sati, PhD, said in an interview. Dr. Sati presented an overview of the topic at the annual meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis (ACTRIMS). He is associate professor of neurology at Cedars Sinai Medical Center in Los Angeles and the director of the neuroimaging program in the department of neurology at Cedars Sinai.

The findings could address an important issue in MS. “Misdiagnosis is a big problem in multiple sclerosis, and it has been for a long time. There are recent surveys of physicians that show that something like 95% of MS physicians have seen a case of misdiagnosis in the last year,” Kevin Patel, MD, said in an interview. Dr. Patel is assistant professor of neurology at the University of California, Los Angeles, and moderated the session where Dr. Sati presented.
 

What is the diagnostic threshold of CVS positivity?

A key question is what percentage of lesions should be CVS positive (CVS+) to predict a diagnosis of MS. One meta-analysis found an average of 73% CVS positivity in patients with MS, but levels below 40% in conditions that can mimic MS, such as migraine (22%), cerebral small vessel disease (28.5%), and neuromyelitis optica spectrum disorder (33.5%). However, both biological and technical factors can influence that percentage. The percentage is lower among older patients with MS and those with vascular comorbidities, likely because of noninflammatory or ischemic plaques or other lesion types, said Dr. Sati.

On the technical side, lower field strengths tend to reveal a lower proportion of CVS than higher field strengths, which are more sensitive. However, the choice of MRI technique can influence sensitivity, and the optimized T2* EPI and FLAIR* techniques have been shown to reveal higher percentages of CVS, even at lower field strengths like the commonly available 3-T.

Forty percent CVS positivity has been suggested as a diagnostic threshold for MS, but this can be time-consuming to determine in patients with large numbers of lesions. An alternative is to analyze a subset of lesions and make a diagnosis if those lesions display the CVS. For example, the ‘Select 3*’ approach would establish an MS diagnosis if at least 3 brain lesions have the CVS, and the ‘6-lesion rule’ would establish the diagnosis if at least 6 out of 10 brain lesions have the CVS. Recent retrospective studies have supported these simplified criteria, suggesting that they perform similarly to the 40% rule.
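To make the thresholds concrete, here is a minimal sketch of how the three criteria described above could be checked against a per-lesion assessment. Everything in it – the helper function, the lesion list, and the example case – is hypothetical and purely illustrative; it is not a validated diagnostic tool or part of the NAIMS protocol.

```python
# Illustrative sketch only: applying the proposed CVS criteria to a
# hypothetical per-lesion assessment (True = central vein visible).
def cvs_criteria(lesions):
    """Return whether a hypothetical case meets each proposed CVS criterion."""
    n_positive = sum(lesions)
    return {
        # 40% rule: at least 40% of all evaluated lesions show the CVS
        "40% rule": n_positive / len(lesions) >= 0.40,
        # 'Select 3*': at least 3 brain lesions show the CVS
        "Select 3*": n_positive >= 3,
        # 6-lesion rule: at least 6 of 10 evaluated lesions show the CVS
        "6-lesion rule": sum(lesions[:10]) >= 6,
    }

# Hypothetical case: 12 evaluated lesions, 7 with a visible central vein
example = [True] * 7 + [False] * 5
print(cvs_criteria(example))  # this made-up case meets all three criteria
```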
 

 

 

What will change clinical practice?

However, retrospective studies aren’t enough to change international diagnostic guidelines and clinical practice. Dr. Sati is part of a group of investigators from the North American Imaging in MS (NAIMS) Cooperative that is conducting a large prospective diagnostic study (CAVS-MS) with a $7.2 million grant from the National Institutes of Health, which is currently recruiting 400 patients being evaluated for MS. The MRI protocol will use the optimized T2* EPI/FLAIR* techniques developed by Dr. Sati on 3-T scanners. “It’s a twofold goal: First the evaluation of the diagnostic power of the central vein sign in a real-world cohort, and then the validation of the advanced MRI technology that we’re developing to image the central vein sign clinically,” said Dr. Sati.

Neurologists generally are becoming more aware of these techniques, according to Dr. Patel, but they aren’t yet widely used outside of research settings.

“We haven’t collected enough evidence to really warrant wide implementation. I suspect that that’s one of the major reasons why it is that we don’t see this deployed more widely. I think there’ll be a bit of time before this is integrated to the standard sequences that are done for evaluation of multiple sclerosis,” said Dr. Patel.

The technique must contend with comorbid factors, especially vascular comorbidities such as hypertension, diabetes, or high cholesterol, that can cause white matter hyperintensities as individuals age. “This can create difficulties with using this procedure, because if you have small vessel disease at the same time you have MS, you have much more T2 lesion volume. It becomes a little bit more difficult to suss out whether the person has MS. So there’s a little bit of work that needs to be done along those lines as well,” said Dr. Patel.

With more research, the technology has the potential to improve MS diagnosis, both among community neurologists and even among specialists, according to Dr. Patel. “There are definitely cases that are rather ambiguous that even though they present at a major academic center, it’s sometimes very difficult for us to determine as to whether the person has multiple sclerosis or not. And this sort of technique can potentially help us in distinguishing those cases. Sometimes even after they see folks at tertiary centers, folks still don’t have a definitive diagnosis,” said Dr. Patel.

Dr. Sati and Dr. Patel have no relevant financial disclosures.


Daylight Saving Time: How an imposed time change alters your brain, and what you can do


On March 13, most of the United States and Canada will advance the clock an hour to be on Daylight Saving Time. Most other countries in the Northern Hemisphere will do the same within a few weeks; and many countries across the Southern Hemisphere turn the clock back an hour around the same time. A friend of mine, who spent time on Capitol Hill, once told me that whether it’s adjusting to Daylight Saving Time (and losing an hour of sleep) or switching back to Standard Time (and picking up an hour), large numbers of Americans call their member of Congress every season to complain.

Why are so many of us annoyed by the semi-annual resetting of clocks? It turns out that there are biological reasons for our discomfort with time changes. Those reasons also come into play when we change time zones as we travel, when we work on the night shift, or when we live at higher latitudes, where depressive symptoms from seasonal affective disorder (SAD) can plague us as the period of daylight progressively shortens in winter.
 

Our internal clock(s)

Each of us has a biological master clock keeping track of where we are in our 24-hour day, making ongoing, time-of-day-appropriate physical and neurologic adjustments. We refer to those automatic adjustments as “circadian” rhythms – from the Latin for “around a day.”


One of the most important regulated functions that is influenced by this time keeping is our sleep-wake cycle. Our brain’s hypothalamus has a kind of “master clock” that receives inputs directly from our eyes, which is how our brain sets our daily cycle period at about 24 hours.

This master clock turns on a tiny structure in our brains, called the pineal gland, to release more of a sleep-inducing chemical, called melatonin, at about the same time every evening. Melatonin levels slowly rise, peaking during deep sleep at night, then slowly decline as you advance toward morning awakening. The shift from darkness to daylight in the morning, which triggers your initial awakening, releases the excitatory neuromodulator norepinephrine, which, with other chemicals, “turns on the lights” in your brain.

That works well most of the time – but no one told your brain that you were going to arbitrarily go to bed an hour earlier (or in the fall, later) on Circadian Rhythm Time!

We also obviously shift the time on the mechanical clock – requiring a reset of the brain’s master clock – when we travel across time zones or work the night shift. That type of desynchronization of our master clock from the mechanical clock puts our waking and sleeping behaviors out of sync with the production of brain chemicals that affect our alertness and mood. The result may be that you find yourself tired, but not sleepy, and often grumpy or even depressed. As an example, on average, people who work the night shift are just a little bit more anxious and depressed than people who get up to rise and shine with the sun every morning.
 

 

 

Seasonal affective disorder

An extreme example of this desynchronization of the master clock can manifest as SAD. SAD is a type of depression that’s related to seasonal transitions. The most commonly cited cases of SAD are for the fall-to-winter transition. In North America, its prevalence is significantly influenced by the distance of one’s place of residence from the equator – with about 12 times the impact in Alaska versus Florida. Of note, a weaker effect of latitude has been recorded in Europe, where more settled populations have had thousands of years to biologically and culturally adapt to their seasonal patterns.

What can we do about our clocks being messed with?

The most common treatment for SAD is light therapy, in which patients sit or work under artificial lights in an early-morning period, to try to advance the chemical signaling that controls sleeping and waking. Alas, light therapy doesn’t work for everyone.

Another approach, with or without the lights, is to engage in activities early in the day that produce brain chemicals to contribute to bright and cheerful waking. Those “raring-to-go” brain chemicals include norepinephrine (produced when you encounter novelty and are just having fun), acetylcholine (produced when you are carefully paying attention and are in a learning and remembering mode), serotonin (produced when you are feeling positive and just a little bit euphoric), and dopamine (produced when you feel happy and all is right with the world).

In fact, you would benefit from creating the habit of starting every day with activity that wakes up your brain. I begin my day with computerized brain exercises that are attentionally demanding, filled with novelty, and richly neurologically rewarding. I then take a brisk morning walk in which I vary my path for the sake of novelty (pumping norepinephrine), pay close attention to my surroundings (pumping acetylcholine and serotonin), and delight in all of the wonderful things out there in my world (pumping dopamine). My dog Doug enjoys this process of waking up brain and body almost as much as I do! Of course, there are a thousand other stimulating things that could help you get your day off to a lively start.

If you anticipate feeling altered by a time change, you could also think about preparing for it in advance. If it’s the semi-annual 1-hour change that throws you off kilter, you might adjust your bedtime by 10 minutes a day for the week before. If you are traveling 12 time zones (and flipping night and day), you may need to make larger adjustments over the preceding couple of weeks. Generally, without that preparation, it takes about 1 day per time zone crossed to naturally adjust your circadian rhythms.
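For those who like to see the numbers, the small sketch below turns that rule of thumb into a printed schedule – shift bedtime by roughly 10 minutes a day until the full change is covered. The function and its pacing constant are illustrative assumptions based on the rough guidance above, not a clinical protocol.

```python
# Illustrative only: a gradual bedtime-shift schedule based on the rough
# guidance above (about 10 minutes of adjustment per day before a change).
def adjustment_schedule(total_shift_minutes, minutes_per_day=10):
    """Yield (day, cumulative minutes shifted) leading up to the time change."""
    days = -(-total_shift_minutes // minutes_per_day)  # ceiling division
    for day in range(1, days + 1):
        yield day, min(day * minutes_per_day, total_shift_minutes)

# Example: preparing for the 1-hour spring-forward change
for day, shift in adjustment_schedule(60):
    print(f"Day {day}: go to bed {shift} minutes earlier than usual")
```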

If you’re a little lazier, like me, you might also adjust to jet lag by not forgetting to take along your little bottle of melatonin tablets, to give your pineal gland a little help. Still, that pineal gland will work hard to tell you to take a nap every day – just when you’ll probably want to be wide awake.

And if, after reading this column, you find yourself still annoyed by the upcoming 1-hour time change, you might just look around at what’s happening out there in the world and decide that your troubles are very small by comparison, and that you should delight in the “extra” hour of sunshine each evening!

Dr. Merzenich is professor emeritus, department of neuroscience, at the University of California, San Francisco. He reported serving in various positions and speaking for Posit Science and Stronger Brain, and has also received funding from the National Institutes of Health. A version of this article first appeared on Medscape.com.


C. difficile vaccine: Pfizer’s phase 3 CLOVER trial shows mixed results

There’s mixed news from Pfizer on results from its CLOVER trial (CLOstridium difficile Vaccine Efficacy TRial), a phase 3 study involving 17,500 adults aged 50 and older that evaluated the company’s candidate vaccine (PF-06425090) for the prevention of Clostridioides difficile (C. diff) infection (CDI).

The bad news is that the trial didn’t meet its efficacy endpoint – the prevention of primary CDI. According to a Pfizer press release, “Vaccine efficacy under the primary endpoint was 31% (96.4%, confidence interval -38.7, 66.6) following the third dose and 28.6% (96.4%, CI -28.4, 61.0) following the second dose. For all CDI cases recorded at 14 days post dose 3, vaccine efficacy was 49%, 47%, and 31% up to 12 months, 24 months, and at final analysis, respectively.”

This news organization requested an interview with a Pfizer spokesperson, but the company declined to comment further.

The good news is that the vaccine did meet its secondary endpoint. There were no cases of CDI requiring medical attention among vaccine recipients; by comparison, there were 11 cases among those who received placebo.

The Centers for Disease Control and Prevention classifies C. diff alongside antimicrobial resistance “threat” organisms, because C. diff infection and antimicrobial resistance often go hand in hand. Their 2019 report noted that in 2017, 223,900 people in the United States required hospitalization for CDI, and at least 12,800 died. C. diff is the most common cause of health care-associated infection and increasingly occurs outside of acute care hospitals. Age older than 65 years is a risk factor for disease, and at least 20% of patients experience recurrence.

The trial enrolled adults aged 50 and older who were at higher risk of CDI because they had received antibiotics within the previous 12 weeks or were likely to have contact with health care systems. Participants received three doses of an investigational vaccine containing detoxified toxins A and B, the principal virulence factors produced by C. diff, given at 0, 1, and 6 months.

This news organization asked C. diff specialist David Aronoff, MD, chair of the department of medicine at Indiana University, for comment. Dr. Aronoff was not involved in the Pfizer clinical trials. He told this news organization via email, “Given the very low number of cases, I am impressed, from the limited data that have been made available, that the vaccine appears to have efficacy of around 50% for reducing CDI and, importantly, might reduce the severity of disease significantly, possibly preventing hospitalizations or worse clinical outcomes. It is unclear if the vaccine reduces the risk of recurrent CDI, but that would be a strong finding if true. I think we need to see these data after being subject to peer review, to better define its potential role in preventing CDI on a larger scale.”

Asked about the numbers needed to treat and cost-effectiveness of treatment, Dr. Aronoff added, “It is not clear how many people would need to receive the vaccine to prevent one hospitalization from CDI, or one death, or one case. Because the study groups had fewer episodes of CDI than anticipated, it watered down the power of this investigation to provide definitive answers regarding its true efficacy.”
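
For readers who want to see what goes into those estimates, here is a minimal sketch of the standard arithmetic behind vaccine efficacy and the number needed to vaccinate. The case counts below are entirely hypothetical (the per-arm counts for the primary endpoint are not given here); only the formulas are standard: efficacy is one minus the risk ratio, and the number needed to vaccinate is the reciprocal of the absolute risk reduction.

```python
def vaccine_efficacy_and_nnv(cases_vax: int, n_vax: int, cases_placebo: int, n_placebo: int):
    """Attack-rate definitions: VE = 1 - (risk in vaccinated / risk in placebo);
    NNV = 1 / (risk in placebo - risk in vaccinated)."""
    risk_vax = cases_vax / n_vax
    risk_placebo = cases_placebo / n_placebo
    ve = 1 - risk_vax / risk_placebo
    nnv = 1 / (risk_placebo - risk_vax)
    return ve, nnv

# Hypothetical example: 20 cases among 8,750 vaccinees vs. 40 among 8,750 placebo recipients.
ve, nnv = vaccine_efficacy_and_nnv(20, 8_750, 40, 8_750)
print(f"Vaccine efficacy: {ve:.0%}; number needed to vaccinate: {nnv:.0f}")
```

With so few cases in either arm, small shifts in the counts move both numbers substantially, which is exactly Dr. Aronoff’s point about the trial’s limited power.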

Dr. Aronoff concluded, “All things considered, I am a cup half-full type of person on these topline results, since there are indications of reducing disease incidence and severity. We can build on these results.”

Dr. Aronoff had a basic science C. diff research grant from Pfizer in 2018-2019 that was not related to vaccines or therapeutics.

A version of this article first appeared on Medscape.com.


New guidelines on MRI use in patients with MS explained

Magnetic resonance imaging (MRI) has long been the standard for diagnosing and following multiple sclerosis (MS), and new guidelines provide updated recommendations on the use of MRI for diagnosis, prognosis, and treatment monitoring.

MS affects approximately one million people in the United States. As family physicians, we need to know these guidelines because we are often the ones who make the initial diagnosis of MS, and if we order the wrong imaging study, we can miss making an accurate diagnosis.

The new MAGNIMS guidelines, sponsored by the Consortium of Multiple Sclerosis Centres, were published in August. The document offers detailed guidance on the use of standardized MRI protocols as well as the use of IV gadolinium-based contrast agents, including in children and pregnant patients.

The guidelines advise using 3-D acquisition techniques (as opposed to 2-D), noting that these are becoming more widely available clinically. Sagittal 3-D T2-weighted fluid-attenuated inversion recovery (FLAIR) is considered the core sequence for MS diagnosis and monitoring because of its high sensitivity. High-quality 2-D pulse sequences can be used as an alternative when 3-D FLAIR is not available.

Although 3 T scanners have a higher detection rate for MS lesions, 1.5 T scanners are sufficient when 3 T is not available. In evaluating the imaging, T2 lesion counts, gadolinium-enhancing lesion counts, and interval changes should be reported.

Gadolinium-based contrast agents (GBCAs) are needed to diagnose MS and rule out other diseases. The delay between contrast injection and acquisition of the postcontrast images should ideally be 10 minutes and no less than 5 minutes. Optic nerve MRI is recommended only in patients with atypical presentations, such as new visual symptoms, and spinal cord MRI is not routinely advised unless it is needed for prognosis.

When the initial MRI does not meet the full diagnostic criteria for MS, brain MRI should be repeated every 6-12 months in suspected cases, using the same modality each time. After treatment is started, a follow-up brain MRI without GBCAs is recommended at about 3 months, with annual follow-up scans thereafter. Routine follow-up without GBCAs is a new recommendation compared with previous versions; however, if the use of GBCAs would change management, they should be used for monitoring.

The same imaging standards are recommended in pediatric patients, with spinal cord MRI reserved for children with spinal cord symptoms or an inconclusive brain MRI, and a scan frequency similar to that in adults. MRI is not contraindicated during pregnancy, but its use should be decided on an individual basis, with standard protocols and a magnetic field strength of 1.5 T. GBCAs should not be used during pregnancy; there are no such limitations in the postpartum period.
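
As a quick aide-mémoire, here is a minimal sketch that restates the protocol points above as a plain data structure. The field names and wording are mine, not the guideline’s, so treat it as a reading aid rather than a substitute for the published recommendations.

```python
# Illustrative summary of the MRI points discussed above; the keys and phrasing
# are ad hoc, not terminology taken from the guideline document itself.
MS_MRI_CHECKLIST = {
    "core_sequence": "sagittal 3-D T2-weighted FLAIR (high-quality 2-D if 3-D is unavailable)",
    "field_strength": "3 T preferred for lesion detection; 1.5 T sufficient otherwise",
    "gbca_delay_minutes": {"ideal": 10, "minimum": 5},  # injection-to-acquisition delay
    "report": ["T2 lesion count", "gadolinium-enhancing lesion count", "interval change"],
    "suspected_ms_criteria_not_met": "repeat brain MRI every 6-12 months, same modality",
    "after_treatment_start": "brain MRI without GBCA at about 3 months, then annually; "
                             "add GBCA only if it would change management",
    "optic_nerve_mri": "only for atypical presentations, such as new visual symptoms",
    "spinal_cord_mri": "not routine; consider for prognosis, cord symptoms, or an inconclusive brain MRI",
    "pregnancy": {"mri": "case by case, standard protocols at 1.5 T", "gbca": "avoid"},
    "pediatric": "same standards and similar scan frequency as adults",
}

# Example lookup during a follow-up visit:
print(MS_MRI_CHECKLIST["after_treatment_start"])
```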

The complete set of guidelines, first published in The Lancet Neurology, is quite extensive and builds on the previous guidelines published in 2017.

While most of these patients will be referred to neurologists, as primary care physicians it is our responsibility to know all aspects of our patients’ diseases and treatments. We may not be actively treating MS in these patients, but we need to know their medications, how those medications interact with others, and how their disease is progressing.

Additionally, we may be the ones asked to order MRIs for monitoring. It is imperative that we know the guidelines for how to do this.

Dr. Girgis practices family medicine in South River, N.J., and is a clinical assistant professor of family medicine at Robert Wood Johnson Medical School, New Brunswick, N.J. You can contact her at [email protected].


New carcinogens added to toxicology list

From environmental tobacco smoke to ultraviolet (UV) radiation, diesel exhaust particulates, lead, and now chronic infection with Helicobacter pylori (H. pylori) – the Report on Carcinogens has regularly updated the list of substances known or “reasonably anticipated” to cause cancer.

The 15th report, which is prepared by the National Toxicology Program (NTP) for the Department of Health and Human Services, has 8 new entries, bringing the number of human carcinogens (eg, metals, pesticides, and drugs) on the list to 256. (The first report, released in 1980, listed 26.) In addition to H. pylori infection, this edition adds the flame-retardant chemical antimony trioxide and 6 haloacetic acids found as water disinfection byproducts.

In 1971, then-President Nixon declared “war on cancer” (the second leading cause of death in the United States) and signed the National Cancer Act. In 1978, Congress ordered the Report on Carcinogens, to educate the public and health professionals on potential environmental carcinogenic hazards.

It is perhaps disheartening to know that even with 256 entries, the list probably understates the number of carcinogens humans and other creatures are exposed to. But things can change with time, and each edition goes through a rigorous round of reviews. Substances are sometimes “delisted” after, for instance, litigation or new research. Saccharin, for example, was listed as “reasonably anticipated” in 1981, based on “sufficient evidence of carcinogenicity in experimental animals,” but it was removed from the ninth edition after an extensive review of decades of saccharin use determined that the data were not sufficient to meet current criteria. Further research had also revealed that the observed bladder tumors in rats arose from a mechanism not relevant to humans.

Other entries, such as the controversial listing of the cancer drug tamoxifen, walk a fine line between risk and benefit. Tamoxifen, first listed in the ninth report (and still in the 15th report), was included because studies revealed that it could increase the risk of uterine cancer in women. But there also was conclusive evidence that it may prevent or delay breast cancer in women who are at high risk.

Ultimately, the report’s authors make it clear that it is for informative value and guidance, not necessarily a dictate. As one report put it: “Personal decisions concerning voluntary exposures to carcinogenic agents need to be based on additional information that is beyond the scope” of the report.

“As the identification of carcinogens is a key step in cancer prevention,” said Rick Woychik, PhD, director of the National Institute of Environmental Health Sciences and NTP, “publication of the report represents an important government activity towards improving public health.”

A version of this article first appeared on Medscape.com.
