Vacuum device quickly stops postpartum hemorrhage

Article Type
Changed
Thu, 09/17/2020 - 15:43

A novel intrauterine device that uses suction to compress the uterus to control postpartum hemorrhage has received high marks for effectiveness and ease of use, researchers say.

Calling this approach “a stroke of brilliance,” James Byrne, MD, said in an interview that it is much quicker and much simpler than other techniques for managing postpartum hemorrhage and is less risky as well.

“This device can be placed in the uterus within a minute or so and does not need any initial anesthesia and would not be associated with the delay needed for a surgical approach,” Dr. Byrne explained. Dr. Byrne, who was not involved in the study, is chair of the department of obstetrics and gynecology at Santa Clara Valley Medical Center in San Jose, Calif.

To test the efficacy and safety of the device (Jada System, Alydia Health), Mary E. D’Alton, MD, and colleagues conducted a prospective, observational treatment study in 12 U.S. medical centers. They reported their findings in an article published online Sept. 9 in Obstetrics and Gynecology.

“The Jada System (novel intrauterine vacuum-induced hemorrhage-control device) was specifically designed to offer rapid treatment by applying low-level intrauterine vacuum to facilitate the physiologic forces of uterine contractions to constrict myometrial blood vessels and achieve hemostasis,” Dr. D’Alton, from New York–Presbyterian/Columbia University Irving Medical Center in New York, and colleagues wrote.

“The device had a low rate of adverse events during this study, all of which were expected risks and resolved with treatment without serious clinical sequelae. Investigators, all first-time users of the device, found the system easy to use, which suggests that, after device education and with availability of a quick reference guide outlining steps, there is a minimal learning curve for use,” they added.

Alydia Health, the company that developed the device, funded this study and supported the research staff, who recruited participants and gathered follow-up data on them. On Sept. 9, the U.S. Food and Drug Administration granted 510(k) clearance for the device, according to a company news release.
 

Effective, safe

The multicenter study included 107 patients (mean age, 29.7 years) with postpartum hemorrhage or abnormal postpartum uterine bleeding, 106 of whom were treated with the device connected to vacuum. More than half (57%) of the participants were White, and just under one-quarter (24%) were Black.

Treatment was successful in 94% (100/106) of participants, with “definitive control of abnormal bleeding” occurring in a median of 3 minutes after attachment to vacuum.

Eight adverse events were judged to have been possibly related to the device or procedure: four cases of endometritis, and one case each of presumed endometritis, bacterial vaginosis, vaginal candidiasis, and disruption of a vaginal laceration repair. All eight adverse events were identified as potential risks, and all resolved without serious clinical consequences.

Thirty-five patients required transfusions of 1-3 U of red blood cells, and five patients required at least 4 U of red blood cells.
 

Uterine atony most frequent culprit

As many as 80% of postpartum hemorrhages are caused by uterine atony, according to the authors.

Dr. Byrne explained that the uterus is a muscular organ that contains many “spiral arteries” that are “squeezed” by the uterus as it tightens down after childbirth, which prevents them from bleeding excessively.

“With uterine atony, the uterus muscle doesn’t squeeze effectively, and therefore it’s not one or two arteries, it’s hundreds and hundreds of small arteries and capillaries [and] arterioles all bleeding; it’s a wide area of uterus,” he continued.

When medications alone are ineffective at controlling bleeding, tamponade is often added to put outward pressure on the inner wall of the uterus for 12-24 hours. Although tamponade is effective in approximately 87% of atony-related cases of postpartum hemorrhage, the use of outward pressure on the uterine walls “is counterintuitive if the ultimate goal is uterine contraction,” the authors wrote.

Dr. Byrne said he and his colleagues saw this device several years ago, and they felt at the time that it appeared to be “more intuitive to use vacuum to compress the uterus inward compared to the nonetheless valuable and effective Bakri balloon and other techniques that expand the uterus outward.”

Unlike the Bakri balloon, whose use routinely involves administration of prophylactic antibiotics, the vacuum device requires none, which further sets it apart, Dr. Byrne said.

In the current study, 64% of participants were obese, which makes management of postpartum hemorrhage “really challenging” because it’s difficult to effectively massage the uterus through adipose tissue, Dr. Byrne explained. Patients with obesity “also have different hemodynamics for how effectively [injected medications will] be delivered to the uterus,” he added.

“A device like this that could be placed and works so efficiently – even with an obese patient – that’s actually very powerful,” Dr. Byrne said.

Quick placement, almost immediate improvement

The discomfort experienced during placement of the device is similar to that experienced during sweeping of the uterus, Dr. Byrne explained. “You’d want a patient comfortable, ideally with an epidural already active, but if it’s an emergency, you wouldn’t have to wait for that; you could sweep the uterus quickly, place this, initiate suction, and it would all be so quick you could usually talk a patient through it and get it done,” Dr. Byrne continued.

Almost all of the investigators (98%) said the device was easy to use, and 97% said they would recommend it.

The vacuum device is made of medical-grade silicone and consists of an oval-shaped intrauterine loop at one end and a vacuum connector at the other end that can be attached to a standard suction canister. On the inner side of the intrauterine loop are 20 vacuum pores covered by a shield that protects uterine tissue and prevents the vacuum pores from clogging with tissue or clotted blood.

Before insertion of the vacuum device, the clinician manually sweeps the uterus to identify retained placental fragments and to assess the uterine cavity. The distal end of the device is inserted into the uterus, and a cervical seal, positioned just outside the cervical os, is filled with 60 to 120 cc of sterile fluid. The proximal end is attached to low-level vacuum at a pressure of 80 ± 10 mm Hg. The device is left in place with continued suction for at least 1 hour after bleeding is controlled, at which time the suction is disconnected and the cervical seal is emptied. The device remains in place for at least 30 minutes, during which the patient is observed closely.

“It looks like 75%-80% of cases stop bleeding within 5 minutes. ... Then you stop the pressure after an hour [and] wait at least 30 minutes. You could actually have this out of the patient’s body within 2 hours,” Dr. Byrne said.

Dr. Byrne has disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


For BP screening, shorter rest time yields similar results

Article Type
Changed
Thu, 09/17/2020 - 15:08

Current guidelines recommend a 5-minute rest period before a blood pressure screening measurement, but that might not be necessary for all patients.

In a prospective crossover study, average blood pressure measurements obtained after 0 or 2 minutes of rest were not significantly different from readings obtained after the recommended 5 minutes of rest in adults with systolic blood pressure below 140 mm Hg.

“The average differences in BP by rest period were small, and BPs obtained after shorter rest periods were noninferior to those obtained after 5 minutes when SBP is below 140,” Tammy M. Brady, MD, PhD, Johns Hopkins University, Baltimore, said in an interview.

“This suggests shorter rest times, even 0 minutes, may be reasonable for screening when the initial SBP is below 140,” said Dr. Brady.

She presented her research at the joint scientific sessions of the American Heart Association Council on Hypertension, AHA Council on Kidney in Cardiovascular Disease, and American Society of Hypertension.
 

A challenging recommendation

The 5-minute rest period is “challenging” to implement in busy clinical settings, Dr. Brady said. The researchers therefore set out to determine the effect of no rest and the effect of a shorter rest period (2 minutes) on blood pressure screening.

They recruited 113 adults (mean age, 55; 64% women, 74% Black) with SBP that ranged from below 115 mm Hg to above 145 mm Hg and with diastolic BP that ranged from below 75 mm Hg to above 105 mm Hg. About one-quarter (28%) had SBP in the stage 2 hypertension range (at least 140 mm Hg).

They obtained four sets of automated BP measurements after 5, 2, or 0 minutes of rest. All participants had their BP measured after a second 5-minute rest period as their last measurement to estimate repeatability.

Overall, there was no significant difference in the average BP obtained at any of the rest periods.

After the first and second 5-minute rest period, BPs were 127.5/74.7 mm Hg and 127.0/75.6 mm Hg, respectively. After 2 and 0 minutes of rest, BPs were 126.8/73.7 mm Hg and 126.5/74.0 mm Hg.

When looking just at adults with SBP below 140 mm Hg, there was no more than an average difference of ±2 mm Hg between BPs obtained at the 5-minute resting periods, compared with the shorter resting periods.

However, in those with SBP of 140 mm Hg or higher, BP values were significantly different (defined as more than ±2 mm Hg) with shorter rest periods, “suggesting that shorter rest periods were in fact inferior to resting for 5 minutes in these patients,” Dr. Brady said.
 

More efficient, economic

“Economics play a significant role in blood pressure screenings, as clinics not as well-funded may find it especially challenging to implement a uniform, 5-minute rest period before testing, which could ultimately reduce the number of patients able to be screened,” Dr. Brady added in a conference statement.

“While our study sample was small, a reasonable approach based on these findings would be to measure blood pressure after minimal to no rest, and then repeat the measurements after 5 minutes only if a patient is found to have elevated blood pressure,” she said.

Weighing in on the results, Karen A. Griffin, MD, who chairs the AHA Council on Hypertension, said that “reducing the rest period to screen an individual for hypertension may result in faster throughput in the clinic and confer a cost savings.”

“At the present time, in order to maintain the clinic flow, some clinics use a single, often times ‘nonrested’ BP measurement as a screen, reserving the 5-minute rest automated-office BP measurement for patients found to have an elevated screening BP,” noted Dr. Griffin, professor of medicine, Loyola University Medical Center, Maywood, Ill.

“Nevertheless, even if limiting the use of automated-office BP to those who fail the initial screening BP, a cost savings would still be realized by reducing the currently recommended 5-minute rest to 2 minutes and have the most impact in very busy, less well-funded clinics,” said Dr. Griffin.

She cautioned, however, that further studies in a larger population will be needed before making a change to current clinical practice guidelines.

The study had no specific funding. Dr. Brady and Dr. Griffin have no relevant disclosures.

A version of this article originally appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Current guidelines recommend a 5-minute rest period before a blood pressure screening measurement, but that might not be necessary for all patients.

Dr. Tammy M. Brady

In a prospective crossover study, average differences in blood pressure measurements obtained after 0 or 2 minutes of rest were not significantly different than readings obtained after the recommended 5 minutes of rest in adults with systolic blood pressure below 140 mm Hg.

“The average differences in BP by rest period were small, and BPs obtained after shorter rest periods were noninferior to those obtained after 5 minutes when SBP is below 140,” Tammy M. Brady, MD, PhD, Johns Hopkins University, Baltimore, said in an interview.

“This suggests shorter rest times, even 0 minutes, may be reasonable for screening when the initial SBP is below 140,” said Brady.

She presented her research at the joint scientific sessions of the American Heart Association Council on Hypertension, AHA Council on Kidney in Cardiovascular Disease, and American Society of Hypertension..
 

A challenging recommendation

The 5-minute rest period is “challenging” to implement in busy clinical settings, Dr. Brady said. The researchers therefore set out to determine the effect of no rest and the effect of a shorter rest period (2 minutes) on blood pressure screening.

They recruited 113 adults (mean age, 55; 64% women, 74% Black) with SBP that ranged from below 115 mm Hg to above 145 mm Hg and with diastolic BP that ranged from below 75 mm Hg to above 105 mm Hg. About one-quarter (28%) had SBP in the stage 2 hypertension range (at least 140 mm Hg).

They obtained four sets of automated BP measurements after 5, 2, or 0 minutes of rest. All participants had their BP measured after a second 5-minute rest period as their last measurement to estimate repeatability.

Overall, there was no significant difference in the average BP obtained at any of the rest periods.

After the first and second 5-minute rest period, BPs were 127.5/74.7 mm Hg and 127.0/75.6 mm Hg, respectively. After 2 and 0 minutes of rest, BPs were 126.8/73.7 mm Hg and 126.5/74.0 mm Hg.

When looking just at adults with SBP below 140 mm Hg, there was no more than an average difference of ±2 mm Hg between BPs obtained at the 5-minute resting periods, compared with the shorter resting periods.

However, in those with SBP below 140 mm Hg, BP values were significantly different (defined as more than ±2 mm Hg) with shorter rest periods, “suggesting that shorter rest periods were in fact inferior to resting for 5 minutes in these patients,” Dr. Brady said.
 

More efficient, economic

“Economics play a significant role in blood pressure screenings, as clinics not as well-funded may find it especially challenging to implement a uniform, 5-minute rest period before testing, which could ultimately reduce the number of patients able to be screened,” Dr. Brady added in a conference statement.

“While our study sample was small, a reasonable approach based on these findings would be to measure blood pressure after minimal to no rest, and then repeat the measurements after 5 minutes only if a patient is found to have elevated blood pressure,” she said.

Weighing in on the results, Karen A. Griffin, MD, who chairs the AHA council on hypertension, said that “reducing the rest period to screen an individual for hypertension may result in faster throughput in the clinic and confer a cost savings.”

“At the present time, in order to maintain the clinic flow, some clinics use a single, often times ‘nonrested’ BP measurement as a screen, reserving the 5-minute rest automated-office BP measurement for patients found to have an elevated screening BP,” noted Dr. Griffin, professor of medicine, Loyola University Medical Center, Maywood, Ill.

“Nevertheless, even if limiting the use of automated-office BP to those who fail the initial screening BP, a cost savings would still be realized by reducing the currently recommended 5-minute rest to 2 minutes and have the most impact in very busy, less well-funded clinics,” said Dr. Griffin.

She cautioned, however, that further studies in a larger population will be needed before making a change to current clinical practice guidelines.

The study had no specific funding. Dr. Brady and Dr. Griffin have no relevant disclosures.

A version of this article originally appeared on Medscape.com.

Current guidelines recommend a 5-minute rest period before a blood pressure screening measurement, but that might not be necessary for all patients.

Dr. Tammy M. Brady

In a prospective crossover study, average differences in blood pressure measurements obtained after 0 or 2 minutes of rest were not significantly different than readings obtained after the recommended 5 minutes of rest in adults with systolic blood pressure below 140 mm Hg.

“The average differences in BP by rest period were small, and BPs obtained after shorter rest periods were noninferior to those obtained after 5 minutes when SBP is below 140,” Tammy M. Brady, MD, PhD, Johns Hopkins University, Baltimore, said in an interview.

“This suggests shorter rest times, even 0 minutes, may be reasonable for screening when the initial SBP is below 140,” said Brady.

She presented her research at the joint scientific sessions of the American Heart Association Council on Hypertension, AHA Council on Kidney in Cardiovascular Disease, and American Society of Hypertension..
 

A challenging recommendation

The 5-minute rest period is “challenging” to implement in busy clinical settings, Dr. Brady said. The researchers therefore set out to determine the effect of no rest and the effect of a shorter rest period (2 minutes) on blood pressure screening.

They recruited 113 adults (mean age, 55; 64% women, 74% Black) with SBP that ranged from below 115 mm Hg to above 145 mm Hg and with diastolic BP that ranged from below 75 mm Hg to above 105 mm Hg. About one-quarter (28%) had SBP in the stage 2 hypertension range (at least 140 mm Hg).

They obtained four sets of automated BP measurements after 5, 2, or 0 minutes of rest. All participants had their BP measured after a second 5-minute rest period as their last measurement to estimate repeatability.

Overall, there was no significant difference in the average BP obtained at any of the rest periods.

After the first and second 5-minute rest periods, BPs were 127.5/74.7 mm Hg and 127.0/75.6 mm Hg, respectively. After 2 and 0 minutes of rest, BPs were 126.8/73.7 mm Hg and 126.5/74.0 mm Hg, respectively.

When looking just at adults with SBP below 140 mm Hg, there was no more than an average difference of ±2 mm Hg between BPs obtained at the 5-minute resting periods, compared with the shorter resting periods.

However, in those with SBP of 140 mm Hg or higher, BP values were significantly different (defined as more than ±2 mm Hg) with shorter rest periods, “suggesting that shorter rest periods were in fact inferior to resting for 5 minutes in these patients,” Dr. Brady said.
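The noninferiority assessment above comes down to arithmetic on paired differences against the prespecified ±2 mm Hg margin. A minimal sketch with invented readings (not study data):

```python
# Illustrative sketch only: the noninferiority check on paired BP differences,
# using invented readings (not study data) and the +/-2 mm Hg margin.

def mean_paired_difference(short_rest, five_min_rest):
    """Mean of (short-rest minus 5-minute-rest) readings, paired by participant."""
    diffs = [s - f for s, f in zip(short_rest, five_min_rest)]
    return sum(diffs) / len(diffs)

def noninferior(mean_diff, margin=2.0):
    """True when the mean difference lies within the +/-margin (mm Hg)."""
    return abs(mean_diff) <= margin

# Hypothetical systolic readings (mm Hg) for five participants, all below 140:
sbp_0min = [126, 131, 118, 124, 129]   # measured after 0 minutes of rest
sbp_5min = [127, 130, 119, 126, 128]   # measured after 5 minutes of rest

d = mean_paired_difference(sbp_0min, sbp_5min)
print(round(d, 1), noninferior(d))  # -0.4 True
```

A real analysis would also put a confidence interval around the mean difference and compare its bounds with the margin; the point here is only that the decision rule is a simple threshold.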
 

More efficient, economic

“Economics play a significant role in blood pressure screenings, as clinics not as well-funded may find it especially challenging to implement a uniform, 5-minute rest period before testing, which could ultimately reduce the number of patients able to be screened,” Dr. Brady added in a conference statement.

“While our study sample was small, a reasonable approach based on these findings would be to measure blood pressure after minimal to no rest, and then repeat the measurements after 5 minutes only if a patient is found to have elevated blood pressure,” she said.

Weighing in on the results, Karen A. Griffin, MD, who chairs the AHA council on hypertension, said that “reducing the rest period to screen an individual for hypertension may result in faster throughput in the clinic and confer a cost savings.”

“At the present time, in order to maintain the clinic flow, some clinics use a single, often times ‘nonrested’ BP measurement as a screen, reserving the 5-minute rest automated-office BP measurement for patients found to have an elevated screening BP,” noted Dr. Griffin, professor of medicine, Loyola University Medical Center, Maywood, Ill.

“Nevertheless, even if limiting the use of automated-office BP to those who fail the initial screening BP, a cost savings would still be realized by reducing the currently recommended 5-minute rest to 2 minutes and have the most impact in very busy, less well-funded clinics,” said Dr. Griffin.

She cautioned, however, that further studies in a larger population will be needed before making a change to current clinical practice guidelines.

The study had no specific funding. Dr. Brady and Dr. Griffin have no relevant disclosures.

A version of this article originally appeared on Medscape.com.

FROM JOINT HYPERTENSION 2020


MS Highlights From the AAN & CMSC Annual Meetings

Article Type
Changed
Fri, 09/25/2020 - 15:53

This supplement to Neurology Reviews compiles MS-related news briefs from the 2020 virtual annual meetings of the American Academy of Neurology and the Consortium of Multiple Sclerosis Centers.  

Click here to read the supplement


2020-2021 respiratory viral season: Onset, presentations, and testing likely to differ in pandemic

Article Type
Changed
Tue, 02/14/2023 - 13:00

Respiratory virus seasons usually follow a fairly well-known pattern. Enterovirus 68 (EV-D68) is a summer-to-early fall virus with biennial peak years. Rhinovirus (HRv) and adenovirus (Adv) occur nearly year-round but may have small upticks in the first month or so that children return to school. Early in the school year, upper respiratory infections from both HRv and Adv and viral sore throats from Adv are common, with conjunctivitis from Adv outbreaks in some years. October to November is human parainfluenza (HPiV) 1 and 2 season, often presenting as croup. Human metapneumovirus infections span October through April. In late November to December, influenza begins, usually with an A type, later transitioning to a B type in February through April. Also in December, respiratory syncytial virus (RSV) starts, characteristically with bronchiolitis presentations, peaking in February to March and tapering off in May. In late March to April, HPiV 3 also appears for 4-6 weeks.
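The calendar this paragraph walks through can be restated as a small lookup table. This is only a convenience sketch of the months quoted above (the key names and helper function are hypothetical), not an epidemiologic model:

```python
# Typical U.S. respiratory virus season windows, as described in the text.
# Months are approximate; keys follow the article's abbreviations.
TYPICAL_SEASON = {
    "EV-D68":    ("Jun", "Oct"),  # summer to early fall, biennial peak years
    "HRv":       ("Jan", "Dec"),  # nearly year-round
    "Adv":       ("Jan", "Dec"),  # nearly year-round
    "HPiV 1/2":  ("Oct", "Nov"),  # often presenting as croup
    "hMPV":      ("Oct", "Apr"),
    "Influenza": ("Nov", "Apr"),  # A types first, B types later
    "RSV":       ("Dec", "May"),  # bronchiolitis presentations
    "HPiV 3":    ("Mar", "Apr"),
}

def circulating(month_abbrev):
    """Viruses whose typical window (inclusive, possibly wrapping the year) covers a month."""
    order = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
             "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    m = order.index(month_abbrev)
    out = []
    for virus, (start, end) in TYPICAL_SEASON.items():
        s, e = order.index(start), order.index(end)
        in_window = s <= m <= e if s <= e else (m >= s or m <= e)
        if in_window:
            out.append(virus)
    return out

print(circulating("Dec"))  # HRv, Adv, hMPV, influenza, and RSV all typically under way
```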

Will 2020-2021 be different?

Summer was remarkably free of expected enterovirus activity, suggesting that the seasonal parade may differ this year. Remember that the 2019-2020 respiratory season suddenly and nearly completely stopped in March because of social distancing and lockdowns needed to address the SARS-CoV-2 pandemic.

The mild influenza season in the southern hemisphere suggests that our influenza season also could be mild. But perhaps not – most southern hemisphere countries that are surveyed for influenza activities had the most intense SARS-CoV-2 mitigations, making the observed mildness potentially related more to social mitigation than less virulent influenza strains. If so, southern hemisphere influenza data may not apply to the United States, where social distancing and masks are ignored or used inconsistently by almost half the population.

Further, the stop-and-go pattern of in-person school/college attendance adds to uncertainties for the usual orderly virus-specific seasonality. The result may be multiple stop-and-go “pop-up” or “mini” outbreaks for any given virus potentially reflected as exaggerated local or regional differences in circulation of various viruses. The erratic seasonality also would increase coinfections, which could present with more severe or different symptoms.
 

SARS-CoV-2’s potential interaction

Will the relatively mild presentations for most children with SARS-CoV-2 hold up in the setting of coinfections or sequential respiratory viral infections? Could SARS-CoV-2 cause worse/more prolonged symptoms or more sequelae if paired simultaneously or in tandem with a traditional respiratory virus? To date, data on the frequency and severity of SARS-CoV-2 coinfections are conflicting and sparse, but it appears that non-SARS-CoV-2 viruses can be involved in 15%-50% of pediatric acute respiratory infections (ARIs).1,2

However, it may not be important to know about coinfecting viruses other than influenza (which can be treated) or SARS-CoV-2 (which requires quarantine and contact tracing), unless symptoms are atypical or more severe than usual. For example, a young child with bronchiolitis is most likely infected with RSV, but HPiV, influenza, metapneumovirus, HRv, and even SARS-CoV-2 can cause bronchiolitis. Even so, testing outpatients for RSV or other non-influenza viruses is not routine or even clinically helpful. Supportive treatment and restriction from daycare attendance are sufficient management for outpatient ARIs whether presenting as bronchiolitis or not. The worry is that SARS-CoV-2 may not provide an identifiable clinical signal, whether it is the primary or a coinfecting ARI pathogen.
 

Considerations for SARS-CoV-2 testing: Outpatient bronchiolitis

If a child presents with classic bronchiolitis but with moderate to severe symptoms, is SARS-CoV-2 a consideration? Perhaps, if SARS-CoV-2 acts similarly to the non-SARS coronaviruses.

A recent report from the 30th Multicenter Airway Research Collaboration (MARC-30) surveillance study (2007-2014) of children hospitalized with clinical bronchiolitis evaluated respiratory viruses, including RSV and the four common non-SARS coronaviruses, using molecular testing.3 Among 1,880 subjects, a CoV (alpha CoV: NL63 or 229E; beta CoV: HKU1 or OC43) was detected in 12%. Most subjects had RSV alone (n = 1,661), 32 had a CoV alone, and 219 had both.
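The reported breakdown is easy to sanity-check with simple arithmetic (counts as quoted above; no new data):

```python
# Category shares among the 1,880 MARC-30 bronchiolitis subjects, as reported.
total = 1880
rsv_only, cov_only, both = 1661, 32, 219

def share(n):
    """Percentage of the full cohort, to one decimal place."""
    return round(100 * n / total, 1)

print(share(rsv_only), share(cov_only), share(both))  # 88.4 1.7 11.6
```

Put another way, roughly one RSV-positive subject in nine also carried a common coronavirus.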

Bronchiolitis subjects with CoV were older – median 3.7 (1.4-5.8) vs. 2.8 (1.9-7.2) years – and more likely male than were RSV subjects (68% vs. 58%). OC43 was most frequent followed by equal numbers of HKU1 and NL63, while 229E was the least frequent. Medical utilization and severity did not differ among the CoVs, or between RSV+CoV vs. RSV alone, unless one considered CoV viral load as a variable. ICU use increased when the polymerase chain reaction cycle threshold result indicated a high CoV viral load.
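The cycle threshold is an inverse, roughly logarithmic proxy for viral load: assuming near-ideal PCR, template doubles each cycle, so each cycle earlier the threshold is crossed implies about twice as much virus. A sketch of that standard relationship (idealized assumption; the study's actual viral-load cutoffs are not reproduced here):

```python
# Relative viral load implied by a PCR cycle threshold (Ct), assuming ideal
# doubling of template each cycle. A lower Ct implies exponentially more virus.

def fold_difference(ct_a, ct_b, efficiency=2.0):
    """How many times more template sample A has than sample B, from their Cts."""
    return efficiency ** (ct_b - ct_a)

# A specimen crossing threshold at cycle 22 vs. one crossing at cycle 32:
print(fold_difference(22, 32))  # 1024.0, i.e. ~1,000-fold higher load
```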

These data suggest CoVs are not infrequent coinfectors with RSV in bronchiolitis – and SARS-CoV-2 may behave the same way. Therefore, a bronchiolitis presentation doesn’t necessarily take us off the hook for the need to consider SARS-CoV-2 testing, particularly in the somewhat older bronchiolitis patient with more than mild symptoms.
 

Considerations for SARS-CoV-2 testing: Outpatient influenza-like illness

In 2020-2021, the Centers for Disease Control and Prevention recommends considering empiric antiviral treatment for influenza-like illnesses (ILIs; fever plus either cough or sore throat) based upon our clinical judgment, even in non-high-risk children.4

While pediatric COVID-19 illnesses are predominantly asymptomatic or mild, a febrile ARI is also a SARS-CoV-2 compatible presentation. So, if all we use is our clinical judgment, how do we know if the febrile ARI is due to influenza or SARS-CoV-2 or both? At least one study used a highly sensitive and specific molecular influenza test to show that the accuracy of clinically diagnosing influenza in children is not much better than flipping a coin and would lead to potential antiviral overuse.5

So, it seems ideal to test for influenza when possible. Point-of-care (POC) tests are frequently used for outpatients. Eight POC Clinical Laboratory Improvement Amendments (CLIA)–waived kits, some also detecting RSV, are available but most have modest sensitivity (60%-80%) compared with lab-based molecular tests.6 That said, if supplies and kits for one of the POC tests are available to us during these SARS-CoV-2 stressed times (back orders seem more common this year), a positive influenza test in the first 48 hours of symptoms confirms the option to prescribe an antiviral. Yet how will we have confidence that the febrile ARI is not also partly due to SARS-CoV-2? Currently febrile ARIs usually are considered SARS-CoV-2 and the children are sent for SARS-CoV-2 testing. During influenza season, it seems we will need to continue to send febrile outpatients for SARS-CoV-2 testing, even if POC influenza positive, via whatever mechanisms are available as time goes on.
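What a 60%-80%-sensitive POC result means at the bedside depends on pretest probability, which shifts with the season. A standard Bayes calculation makes the point; the sensitivity, specificity, and prevalence figures below are round-number assumptions for illustration, not values from the cited CDC table:

```python
# Predictive values of a point-of-care influenza test from sensitivity,
# specificity, and pretest probability (standard Bayes arithmetic).
# The 0.70/0.95 figures and both prevalences are illustrative assumptions.

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) rounded to two decimals."""
    tp = sens * prev                 # true positives per tested patient
    fp = (1 - spec) * (1 - prev)     # false positives
    fn = (1 - sens) * prev           # false negatives
    tn = spec * (1 - prev)           # true negatives
    return round(tp / (tp + fp), 2), round(tn / (tn + fn), 2)

# Same kit, low vs. high influenza prevalence among tested ILI patients:
print(predictive_values(0.70, 0.95, 0.05))  # off-season
print(predictive_values(0.70, 0.95, 0.30))  # mid-season
```

At 30% prevalence the negative predictive value drops toward 0.88, one quantitative way of stating the caution above that a negative POC result mid-season should not settle the question.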

We expect more rapid pediatric testing modalities for SARS-CoV-2 (maybe even saliva tests) to become available over the next months. Indeed, rapid antigen tests and rapid molecular tests are being evaluated in adults and seem destined for CLIA waivers as POC tests, and even home testing kits. Pediatric approvals hopefully also will occur. So, the pathways for SARS-CoV-2 testing available now will likely change over this winter. But be aware that supplies/kits will be prioritized to locations within high need areas and bulk purchase contracts. So POC kits may remain scarce for practices, meaning a reference laboratory still could be the way to go for SARS-CoV-2 for at least the rest of 2020. Reference labs are becoming creative as well; one combined detection of influenza A, influenza B, RSV, and SARS-CoV-2 into one test, and hopes to get approval for swab collection that can be done by families at home and mailed in.

 

Summary

Expect variations on the traditional parade of seasonal respiratory viruses, with increased numbers of coinfections. Choosing the outpatient who needs influenza testing is the same as in past years, although we have CDC permissive recommendations to prescribe antivirals for any outpatient ILI within the first 48 hours of symptoms. Still, POC testing for influenza remains potentially valuable in the ILI patient. The choice of whether and how to test for SARS-CoV-2, given its potential to be a primary or coinfecting agent in presentations linked more closely to a traditional virus (e.g., RSV bronchiolitis), will be a test of our clinical judgment until more data and easier testing are available. Further complicating coinfection recognition is the fact that many sick visits occur by telehealth and much testing is done at drive-through SARS-CoV-2 testing facilities with no clinician exam. Unless we are liberal in SARS-CoV-2 testing, detecting SARS-CoV-2 coinfections is easier said than done, given that their usually mild presentation can be overshadowed by any coinfecting virus.

But understanding who has SARS-CoV-2, even as a coinfection, still is essential in controlling the pandemic. We will need to be vigilant for evolving approaches to SARS-CoV-2 testing in the context of symptomatic ARI presentations, knowing this will likely remain a moving target for the foreseeable future.
 

Dr. Harrison is professor of pediatrics and pediatric infectious diseases at Children’s Mercy Hospital-Kansas City, Mo. Children’s Mercy Hospital receives grant funding to study two candidate RSV vaccines. The hospital also receives CDC funding under the New Vaccine Surveillance Network for multicenter surveillance of acute respiratory infections, including influenza, RSV, and parainfluenza virus. Email Dr. Harrison at [email protected].

References

1. Pediatrics. 2020;146(1):e20200961.

2. JAMA. 2020 May 26;323(20):2085-6.

3. Pediatrics. 2020. doi: 10.1542/peds.2020-1267.

4. www.cdc.gov/flu/professionals/antivirals/summary-clinicians.htm.

5. J. Pediatr. 2020. doi: 10.1016/j.jpeds.2020.08.007.

6. www.cdc.gov/flu/professionals/diagnosis/table-nucleic-acid-detection.html.

Publications
Topics
Sections

Respiratory virus seasons usually follow a fairly well-known pattern. Enterovirus 68 (EV-D68) is a summer-to-early fall virus with biennial peak years. Rhinovirus (HRv) and adenovirus (Adv) occur nearly year-round but may have small upticks in the first month or so that children return to school. Early in the school year, upper respiratory infections from both HRv and Adv and viral sore throats from Adv are common, with conjunctivitis from Adv outbreaks in some years. October to November is human parainfluenza (HPiV) 1 and 2 season, often presenting as croup. Human metapneumovirus infections span October through April. In late November to December, influenza begins, usually with an A type, later transitioning to a B type in February through April. Also in December, respiratory syncytial virus (RSV) starts, characteristically with bronchiolitis presentations, peaking in February to March and tapering off in May. In late March to April, HPiV 3 also appears for 4-6 weeks.

Will 2020-2021 be different?

Summer was remarkably free of expected enterovirus activity, suggesting that the seasonal parade may differ this year. Remember that the 2019-2020 respiratory season suddenly and nearly completely stopped in March because of social distancing and lockdowns needed to address the SARS-CoV-2 pandemic.

The mild influenza season in the southern hemisphere suggests that our influenza season also could be mild. But perhaps not – most southern hemisphere countries that are surveyed for influenza activities had the most intense SARS-CoV-2 mitigations, making the observed mildness potentially related more to social mitigation than less virulent influenza strains. If so, southern hemisphere influenza data may not apply to the United States, where social distancing and masks are ignored or used inconsistently by almost half the population.

Dr. Christopher J. Harrison

Further, the stop-and-go pattern of in-person school/college attendance adds to uncertainties for the usual orderly virus-specific seasonality. The result may be multiple stop-and-go “pop-up” or “mini” outbreaks for any given virus potentially reflected as exaggerated local or regional differences in circulation of various viruses. The erratic seasonality also would increase coinfections, which could present with more severe or different symptoms.
 

SARS-CoV-2’s potential interaction

Will the relatively mild presentations for most children with SARS-CoV-2 hold up in the setting of coinfections or sequential respiratory viral infections? Could SARS-CoV-2 cause worse/more prolonged symptoms or more sequelae if paired simultaneously or in tandem with a traditional respiratory virus? To date, data on the frequency and severity of SARS-CoV-2 coinfections are conflicting and sparse, but it appears that non-SARS-CoV-2 viruses can be involved in 15%-50% pediatric acute respiratory infections.1,2

However, it may not be important to know about coinfecting viruses other than influenza (can be treated) or SARS-CoV-2 (needs quarantine and contact tracing), unless symptoms are atypical or more severe than usual. For example, a young child with bronchiolitis is most likely infected with RSV, but HPiV, influenza, metapneumovirus, HRv, and even SARS-CoV-2 can cause bronchiolitis. Even so, testing outpatients for RSV or non-influenza is not routine or even clinically helpful. Supportive treatment and restriction from daycare attendance are sufficient management for outpatient ARIs whether presenting as bronchiolitis or not. The worry is that SARS-CoV-2 as a coinfecting agent may not provide an identifiable clinical signal as primary or coinfecting ARI pathogen.
 

 

 

Considerations for SARS-CoV-2 testing: Outpatient bronchiolitis

If a child presents with classic bronchiolitis but has above moderate to severe symptoms, is SARS-CoV-2 a consideration? Perhaps, if SARS-CoV-2 acts similarly to non-SARS-CoV-2s.

A recent report from the 30th Multicenter Airway Research Collaboration (MARC-30) surveillance study (2007-2014) of children hospitalized with clinical bronchiolitis evaluated respiratory viruses, including RSV and the four common non-SARS coronaviruses using molecular testing.3 Among 1,880 subjects, a CoV (alpha CoV: NL63 or 229E, or beta CoV: KKU1 or OC43) was detected in 12%. Yet most had only RSV (n = 1,661); 32 had only CoV (n = 32). But note that 219 had both.

Bronchiolitis subjects with CoV were older – median 3.7 (1.4-5.8) vs. 2.8 (1.9-7.2) years – and more likely male than were RSV subjects (68% vs. 58%). OC43 was most frequent followed by equal numbers of HKU1 and NL63, while 229E was the least frequent. Medical utilization and severity did not differ among the CoVs, or between RSV+CoV vs. RSV alone, unless one considered CoV viral load as a variable. ICU use increased when the polymerase chain reaction cycle threshold result indicated a high CoV viral load.

These data suggest CoVs are not infrequent coinfectors with RSV in bronchiolitis – and that SARS-CoV-2 is the same. Therefore, a bronchiolitis presentation doesn’t necessarily take us off the hook for the need to consider SARS-CoV-2 testing, particularly in the somewhat older bronchiolitis patient with more than mild symptoms.
 

Considerations for SARS-CoV-2 testing: Outpatient influenza-like illness

In 2020-2021, the Centers for Disease Control and Prevention recommends considering empiric antiviral treatment for ILIs (fever plus either cough or sore throat) based upon our clinical judgement, even in non-high-risk children.4

While pediatric COVID-19 illnesses are predominantly asymptomatic or mild, a febrile ARI is also a SARS-CoV-2 compatible presentation. So, if all we use is our clinical judgment, how do we know if the febrile ARI is due to influenza or SARS-CoV-2 or both? At least one study used a highly sensitive and specific molecular influenza test to show that the accuracy of clinically diagnosing influenza in children is not much better than flipping a coin and would lead to potential antiviral overuse.5

So, it seems ideal to test for influenza when possible. Point-of-care (POC) tests are frequently used for outpatients. Eight POC Clinical Laboratory Improvement Amendments (CLIA)–waived kits, some also detecting RSV, are available but most have modest sensitivity (60%-80%) compared with lab-based molecular tests.6 That said, if supplies and kits for one of the POC tests are available to us during these SARS-CoV-2 stressed times (back orders seem more common this year), a positive influenza test in the first 48 hours of symptoms confirms the option to prescribe an antiviral. Yet how will we have confidence that the febrile ARI is not also partly due to SARS-CoV-2? Currently febrile ARIs usually are considered SARS-CoV-2 and the children are sent for SARS-CoV-2 testing. During influenza season, it seems we will need to continue to send febrile outpatients for SARS-CoV-2 testing, even if POC influenza positive, via whatever mechanisms are available as time goes on.

We expect more rapid pediatric testing modalities for SARS-CoV-2 (maybe even saliva tests) to become available over the next months. Indeed, rapid antigen tests and rapid molecular tests are being evaluated in adults and seem destined for CLIA waivers as POC tests, and even home testing kits. Pediatric approvals hopefully also will occur. So, the pathways for SARS-CoV-2 testing available now will likely change over this winter. But be aware that supplies/kits will be prioritized to locations within high need areas and bulk purchase contracts. So POC kits may remain scarce for practices, meaning a reference laboratory still could be the way to go for SARS-CoV-2 for at least the rest of 2020. Reference labs are becoming creative as well; one combined detection of influenza A, influenza B, RSV, and SARS-CoV-2 into one test, and hopes to get approval for swab collection that can be done by families at home and mailed in.

 

Summary

Expect variations on the traditional parade of seasonal respiratory viruses, with increased numbers of coinfections. Choosing the outpatient who needs influenza testing is the same as in past years, although we have CDC permissive recommendations to prescribe antivirals for any outpatient ILI within the first 48 hours of symptoms. Still, POC testing for influenza remains potentially valuable in the ILI patient. The choice of whether and how to test for SARS-CoV-2 given its potential to be a primary or coinfecting agent in presentations linked more closely to a traditional virus (e.g. RSV bronchiolitis) will be a test of our clinical judgement until more data and easier testing are available. Further complicating coinfection recognition is the fact that many sick visits occur by telehealth and much testing is done at drive-through SARS-CoV-2 testing facilities with no clinician exam. Unless we are liberal in SARS-CoV-2 testing, detecting SARS-CoV-2 coinfections is easier said than done given its usually mild presentation being overshadowed by any coinfecting virus.

But understanding who has SARS-CoV-2, even as a coinfection, still is essential in controlling the pandemic. We will need to be vigilant for evolving approaches to SARS-CoV-2 testing in the context of symptomatic ARI presentations, knowing this will likely remain a moving target for the foreseeable future.
 

Dr. Harrison is professor of pediatrics and pediatric infectious diseases at Children’s Mercy Hospital-Kansas City, Mo. Children’s Mercy Hospital receives grant funding to study two candidate RSV vaccines. The hospital also receives CDC funding under the New Vaccine Surveillance Network for multicenter surveillance of acute respiratory infections, including influenza, RSV, and parainfluenza virus. Email Dr. Harrison at [email protected].

References

1. Pediatrics. 2020;146(1):e20200961.

2. JAMA. 2020 May 26;323(20):2085-6.

3. Pediatrics. 2020. doi: 10.1542/peds.2020-1267.

4. www.cdc.gov/flu/professionals/antivirals/summary-clinicians.htm.

5. J. Pediatr. 2020. doi: 10.1016/j.jpeds.2020.08.007.

6. www.cdc.gov/flu/professionals/diagnosis/table-nucleic-acid-detection.html.

Respiratory virus seasons usually follow a fairly well-known pattern. Enterovirus 68 (EV-D68) is a summer-to-early fall virus with biennial peak years. Rhinovirus (HRv) and adenovirus (Adv) occur nearly year-round but may have small upticks in the first month or so that children return to school. Early in the school year, upper respiratory infections from both HRv and Adv and viral sore throats from Adv are common, with conjunctivitis from Adv outbreaks in some years. October to November is human parainfluenza (HPiV) 1 and 2 season, often presenting as croup. Human metapneumovirus infections span October through April. In late November to December, influenza begins, usually with an A type, later transitioning to a B type in February through April. Also in December, respiratory syncytial virus (RSV) starts, characteristically with bronchiolitis presentations, peaking in February to March and tapering off in May. In late March to April, HPiV 3 also appears for 4-6 weeks.

Will 2020-2021 be different?

Summer was remarkably free of expected enterovirus activity, suggesting that the seasonal parade may differ this year. Remember that the 2019-2020 respiratory season suddenly and nearly completely stopped in March because of social distancing and lockdowns needed to address the SARS-CoV-2 pandemic.

The mild influenza season in the southern hemisphere suggests that our influenza season also could be mild. But perhaps not – most southern hemisphere countries that are surveyed for influenza activities had the most intense SARS-CoV-2 mitigations, making the observed mildness potentially related more to social mitigation than less virulent influenza strains. If so, southern hemisphere influenza data may not apply to the United States, where social distancing and masks are ignored or used inconsistently by almost half the population.

Dr. Christopher J. Harrison

Further, the stop-and-go pattern of in-person school/college attendance adds to uncertainties for the usual orderly virus-specific seasonality. The result may be multiple stop-and-go “pop-up” or “mini” outbreaks for any given virus potentially reflected as exaggerated local or regional differences in circulation of various viruses. The erratic seasonality also would increase coinfections, which could present with more severe or different symptoms.
 

SARS-CoV-2’s potential interaction

Will the relatively mild presentations for most children with SARS-CoV-2 hold up in the setting of coinfections or sequential respiratory viral infections? Could SARS-CoV-2 cause worse/more prolonged symptoms or more sequelae if paired simultaneously or in tandem with a traditional respiratory virus? To date, data on the frequency and severity of SARS-CoV-2 coinfections are conflicting and sparse, but it appears that non-SARS-CoV-2 viruses can be involved in 15%-50% pediatric acute respiratory infections.1,2

However, it may not be important to know about coinfecting viruses other than influenza (can be treated) or SARS-CoV-2 (needs quarantine and contact tracing), unless symptoms are atypical or more severe than usual. For example, a young child with bronchiolitis is most likely infected with RSV, but HPiV, influenza, metapneumovirus, HRv, and even SARS-CoV-2 can cause bronchiolitis. Even so, testing outpatients for RSV or non-influenza is not routine or even clinically helpful. Supportive treatment and restriction from daycare attendance are sufficient management for outpatient ARIs whether presenting as bronchiolitis or not. The worry is that SARS-CoV-2 as a coinfecting agent may not provide an identifiable clinical signal as primary or coinfecting ARI pathogen.
 

 

 

Considerations for SARS-CoV-2 testing: Outpatient bronchiolitis

If a child presents with classic bronchiolitis but has above moderate to severe symptoms, is SARS-CoV-2 a consideration? Perhaps, if SARS-CoV-2 acts similarly to non-SARS-CoV-2s.

A recent report from the 30th Multicenter Airway Research Collaboration (MARC-30) surveillance study (2007-2014) of children hospitalized with clinical bronchiolitis evaluated respiratory viruses, including RSV and the four common non-SARS coronaviruses using molecular testing.3 Among 1,880 subjects, a CoV (alpha CoV: NL63 or 229E, or beta CoV: KKU1 or OC43) was detected in 12%. Yet most had only RSV (n = 1,661); 32 had only CoV (n = 32). But note that 219 had both.

Bronchiolitis subjects with CoV were older – median 3.7 (1.4-5.8) vs. 2.8 (1.9-7.2) years – and more likely male than were RSV subjects (68% vs. 58%). OC43 was most frequent followed by equal numbers of HKU1 and NL63, while 229E was the least frequent. Medical utilization and severity did not differ among the CoVs, or between RSV+CoV vs. RSV alone, unless one considered CoV viral load as a variable. ICU use increased when the polymerase chain reaction cycle threshold result indicated a high CoV viral load.

These data suggest CoVs are not infrequent coinfectors with RSV in bronchiolitis – and that SARS-CoV-2 is the same. Therefore, a bronchiolitis presentation doesn’t necessarily take us off the hook for the need to consider SARS-CoV-2 testing, particularly in the somewhat older bronchiolitis patient with more than mild symptoms.
 

Considerations for SARS-CoV-2 testing: Outpatient influenza-like illness

In 2020-2021, the Centers for Disease Control and Prevention recommends considering empiric antiviral treatment for ILIs (fever plus either cough or sore throat) based upon our clinical judgement, even in non-high-risk children.4

While pediatric COVID-19 illnesses are predominantly asymptomatic or mild, a febrile ARI is also a SARS-CoV-2 compatible presentation. So, if all we use is our clinical judgment, how do we know if the febrile ARI is due to influenza or SARS-CoV-2 or both? At least one study used a highly sensitive and specific molecular influenza test to show that the accuracy of clinically diagnosing influenza in children is not much better than flipping a coin and would lead to potential antiviral overuse.5

So, it seems ideal to test for influenza when possible. Point-of-care (POC) tests are frequently used for outpatients. Eight POC Clinical Laboratory Improvement Amendments (CLIA)–waived kits, some also detecting RSV, are available but most have modest sensitivity (60%-80%) compared with lab-based molecular tests.6 That said, if supplies and kits for one of the POC tests are available to us during these SARS-CoV-2 stressed times (back orders seem more common this year), a positive influenza test in the first 48 hours of symptoms confirms the option to prescribe an antiviral. Yet how will we have confidence that the febrile ARI is not also partly due to SARS-CoV-2? Currently febrile ARIs usually are considered SARS-CoV-2 and the children are sent for SARS-CoV-2 testing. During influenza season, it seems we will need to continue to send febrile outpatients for SARS-CoV-2 testing, even if POC influenza positive, via whatever mechanisms are available as time goes on.

We expect more rapid pediatric testing modalities for SARS-CoV-2 (maybe even saliva tests) to become available over the next months. Indeed, rapid antigen tests and rapid molecular tests are being evaluated in adults and seem destined for CLIA waivers as POC tests, and even home testing kits. Pediatric approvals hopefully also will occur. So, the pathways for SARS-CoV-2 testing available now will likely change over this winter. But be aware that supplies/kits will be prioritized to locations within high-need areas and to bulk purchase contracts. So POC kits may remain scarce for practices, meaning a reference laboratory still could be the way to go for SARS-CoV-2 testing for at least the rest of 2020. Reference labs are becoming creative as well; one has combined detection of influenza A, influenza B, RSV, and SARS-CoV-2 into a single test, and it hopes to get approval for swab collection that can be done by families at home and mailed in.

 

Summary

Expect variations on the traditional parade of seasonal respiratory viruses, with increased numbers of coinfections. Choosing the outpatient who needs influenza testing is the same as in past years, although we have permissive CDC recommendations to prescribe antivirals for any outpatient ILI within the first 48 hours of symptoms. Still, POC testing for influenza remains potentially valuable in the ILI patient. The choice of whether and how to test for SARS-CoV-2, given its potential to be a primary or coinfecting agent in presentations linked more closely to a traditional virus (e.g., RSV bronchiolitis), will be a test of our clinical judgment until more data and easier testing are available. Further complicating coinfection recognition is the fact that many sick visits occur by telehealth and much testing is done at drive-through SARS-CoV-2 testing facilities with no clinician exam. Unless we are liberal in SARS-CoV-2 testing, detecting SARS-CoV-2 coinfections is easier said than done, given that its usually mild presentation can be overshadowed by any coinfecting virus.

But understanding who has SARS-CoV-2, even as a coinfection, still is essential in controlling the pandemic. We will need to be vigilant for evolving approaches to SARS-CoV-2 testing in the context of symptomatic ARI presentations, knowing this will likely remain a moving target for the foreseeable future.
 

Dr. Harrison is professor of pediatrics and pediatric infectious diseases at Children’s Mercy Hospital-Kansas City, Mo. Children’s Mercy Hospital receives grant funding to study two candidate RSV vaccines. The hospital also receives CDC funding under the New Vaccine Surveillance Network for multicenter surveillance of acute respiratory infections, including influenza, RSV, and parainfluenza virus. Email Dr. Harrison at [email protected].

References

1. Pediatrics. 2020;146(1):e20200961.

2. JAMA. 2020 May 26;323(20):2085-6.

3. Pediatrics. 2020. doi: 10.1542/peds.2020-1267.

4. www.cdc.gov/flu/professionals/antivirals/summary-clinicians.htm.

5. J Pediatr. 2020. doi: 10.1016/j.jpeds.2020.08.007.

6. www.cdc.gov/flu/professionals/diagnosis/table-nucleic-acid-detection.html.


Insomnia may have a role in generation of stressful life events


Insomnia disorder appears to play a causal role in the development of new stressful life events, especially “dependent” events for which individuals are at least partly responsible, said the investigators of an ongoing longitudinal study of people who have experienced involuntary job loss.

The “stress-generation hypothesis” has been applied for several decades in the context of depression. It posits that depressed individuals generate more stressful life events – events that create family conflict or disrupt careers, for instance – than individuals who are not depressed.

The new analysis of individuals with involuntary job loss suggests that the same can be said of insomnia. “Insomnia disorder is associated with fatigue, daytime sleepiness, impaired concentration, and difficulties in emotional regulation,” Iva Skobic, MSPH, MA, a PhD student at the University of Arizona, Tucson, said at the virtual annual meeting of the Associated Professional Sleep Societies.

“These may lead to impaired decision-making, interpersonal conflicts, difficulty meeting deadlines and keeping commitments, and other sources [of stressful life events],” she said. “This extension of the stress-generation hypothesis has important implications for harm reduction interventions for insomnia disorder.”

Investigators conducted a cross-lagged panel analysis using baseline and 3-month follow-up data from 137 individuals who completed a standardized, contextual life event measure called the Life Events and Difficulties Schedule after having lost their jobs involuntarily. Participants were interviewed, and their events were rated for severity by a consensus panel using operationalized criteria. The analysis employed linear regression controlling for covariates (age, gender, and race) and logistic regression that controlled for insomnia at baseline. Insomnia disorder was defined as meeting ICSD-2/3 criteria using the Duke Structured Interview for Sleep Disorders.

The findings: Insomnia disorder at baseline predicted the number of stressful life events (either dependent or interpersonal) generated within 3 months (beta, 0.70; standard error, 0.31; t = 2.27; P = .03). Conversely, the number of stressful events at baseline did not predict insomnia (odds ratio, 0.97; 95% confidence interval, 0.73-1.29). There also was a trend toward increased generation of dependent events specifically among those with insomnia disorder.
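The cross-lagged logic can be illustrated with a small numpy sketch on simulated data. Note that the numbers, variable names, and the single covariate here are hypothetical stand-ins, not the study's data or code: each lagged direction gets its own adjusted regression, and here only the linear leg (follow-up event counts on baseline insomnia) is shown; the reverse leg would use logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 137  # matches the study's sample size; everything else is simulated

insomnia = rng.integers(0, 2, n)             # baseline insomnia disorder (0/1)
age = rng.normal(42, 10, n)                  # one illustrative covariate
# Simulated follow-up event counts with a built-in insomnia effect of 0.7
events = 1.0 + 0.7 * insomnia + rng.normal(0, 1, n)

# Ordinary least squares with an intercept, the exposure, and the covariate
X = np.column_stack([np.ones(n), insomnia, age])
beta, *_ = np.linalg.lstsq(X, events, rcond=None)
print(f"adjusted insomnia coefficient: {beta[1]:.2f}")  # should land near 0.7
```

With a beta of this size and a standard error around 0.3, the t statistic and P value reported above follow from the usual Wald test (t = beta / SE).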

Participants were a mean age of 42 years, and all had been in their previous place of employment for at least 6 months. Nearly 60% met the diagnostic threshold for insomnia at baseline. They were part of a larger ongoing study examining the linkages between job loss and sleep disturbances, obesity, and mental health – the Assessing Daily Activity Patterns through Occupational Transitions (ADAPT) study, supported by the National Heart, Lung, and Blood Institute.

This analysis on insomnia was completed before the COVID-19 pandemic began, but it and other analyses soon to be reported are highly relevant to the economic climate, said Patricia Haynes, PhD, principal investigator of ADAPT and a coauthor of the insomnia study, in an interview after the meeting.

Insomnia is a frequent comorbidity of depression and shares many of its symptoms, from increased fatigue to emotional dysregulation and an increased risk of maladaptive coping strategies. “Interestingly, the literature on the stress-generation hypothesis posits that these very symptoms are on the causal pathway between depression and stressful life events,” said Ms. Skobic at the meeting.

In commenting on the study, Krishna M. Sundar, MD, medical director of the Sleep-Wake Center at the University of Utah, Salt Lake City, noted that the analysis did not include any measure of the severity of insomnia. Still, he said, “finding an association [with stress generation] at [just] 3 months with the presence of insomnia disorder is quite interesting.”

There were higher rates of insomnia in the sample than depression, Dr. Haynes said, but the analysis did not control for depression or take it into account.

“We know [from prior research] that stress clearly leads to insomnia. The big [takeaway] here is that insomnia can also lead to more stress,” she said. “It’s important to think of it as a reciprocal relationship. If we can potentially treat insomnia, we may be able to stop that cycle of other stressful events that affect both [the individuals] and others as well.”

Ms. Skobic had no disclosures.


Article Source

FROM SLEEP 2020


New radiomic model may improve prognostication in meningioma


A novel radiomic model may improve clinical decision-making and prognostication in patients with low- to high-grade meningioma, according to a study published in the European Journal of Radiology.

The model – which combines conventional magnetic resonance imaging (cMRI), apparent diffusion coefficient (ADC) maps, and susceptibility weighted imaging (SWI) – was the best performer of all models tested.

Recent studies have shown that radiomic features from cMRI or ADC maps could build “a robust model to predict the grade of meningioma by using machine learning algorithms,” wrote study author Jianping Hu, MD, PhD, of Fujian Medical University in Fujian, China, and colleagues.

With that in mind, the researchers evaluated the role of radiomic models based on cMRI, ADC maps, and/or SWI in predicting meningioma grade.
 

Patients and models

The team retrospectively analyzed 514 patients with meningioma who underwent preoperative MRI assessment over a 10-year period. There were 316 patients included in the final analysis, 229 with low-grade (grade I) and 87 with high-grade (grade II-III) meningioma.

Radiomic features from cMRI, ADC maps, and SWI were extracted based on total tumor volume.

Using a nested leave-one-out cross-validation method, the researchers evaluated the prediction performance of various radiomic models, including cMRI, ADC, SWI, ADC plus SWI, cMRI plus ADC, cMRI plus SWI, and cMRI plus ADC plus SWI.

To establish the final prediction model, the researchers used least absolute shrinkage and selection operator (LASSO) feature selection and implemented a random forest classifier that was trained with and without subsampling. The area under the receiver operating characteristic curve (AUC) was used to evaluate the prediction performance of each model.
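As a rough illustration of the described pipeline (a scikit-learn sketch on synthetic stand-in features, not the authors' code, and omitting the subsampling variant), LASSO-style selection can sit inside a leave-one-out loop around a random forest, with AUC computed on the pooled held-out predictions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for radiomic features (e.g., 50 features per tumor)
X, y = make_classification(n_samples=60, n_features=50, n_informative=5,
                           random_state=0)

# L1-penalized logistic regression plays the role of LASSO selection here;
# refitting it inside each fold keeps the selection "nested" in the CV loop.
pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)

held_out = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    pipe.fit(X[train], y[train])
    held_out[test] = pipe.predict_proba(X[test])[:, 1]

print(f"pooled leave-one-out AUC: {roc_auc_score(y, held_out):.2f}")
```

Refitting the feature selector within every fold, rather than once on all the data, is what prevents the held-out AUC from being optimistically biased.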
 

Results

The model combining cMRI, ADC, and SWI had the best performance in predicting meningioma grade. The AUC of this model was 0.81 with subsampling and 0.84 without subsampling. The other models had an AUC range of 0.71-0.79 with subsampling and 0.75-0.80 without subsampling.

“Our results indicated that [the] multiparametric radiomic model based on cMRI, ADC map, and SWI [tended] to be the best model for the prediction of meningioma grade,” Dr. Hu and colleagues wrote.

Other recent studies have demonstrated that radiomic features from various imaging parameters, such as diffusion weighted imaging and cMRI, can establish robust models for predicting meningioma grade, with AUCs ranging from 0.63 to 0.91.

While these findings are encouraging, the researchers acknowledged that these data should be interpreted with caution, as the cystic or necrotic areas of tumor were included in the analysis. In addition, the retrospective nature of the study could have introduced selection bias.

No funding sources were reported. The authors reported having no conflicts of interest.

SOURCE: Hu J et al. Eur J Radiol. 2020 Aug 28. doi: 10.1016/j.ejrad.2020.109251.


Article Source

FROM EUROPEAN JOURNAL OF RADIOLOGY


AI algorithm on par with radiologists as mammogram reader


 

An artificial intelligence (AI) computer algorithm performed on par with, and in some cases exceeded, radiologists in reading mammograms in a case-control study of 8,805 women undergoing routine screening.

The algorithm – from the company Lunit, which was not involved in the study – had an area under the curve of 0.956 for detection of pathologically confirmed breast cancer.

When operating at a specificity of 96.6%, the sensitivity was 81.9% for the algorithm, 77.4% for first-reader radiologists, and 80.1% for second-reader radiologists. Combining the algorithm with first-reader radiologists identified more cases than combining first- and second-reader radiologists.

These findings were published in JAMA Oncology.

The study’s authors wrote that the algorithm results are a “considerable” achievement because, unlike the radiologists, the algorithm had no access to prior mammograms or information about hormonal medications or breast symptoms.

“We believe that the time has come to evaluate AI CAD [computer-aided detection] algorithms as independent readers in prospective clinical studies,” Mattie Salim, MD, of Karolinska Institute/Karolinska University Hospital in Stockholm, and colleagues wrote.

“The authors are to be commended for providing data that support this next critical phase of discovery,” Constance Dobbins Lehman, MD, PhD, of Massachusetts General Hospital and Harvard Medical School, both in Boston, wrote in a related editorial. She added that “it is time to move beyond simulation and reader studies and enter the critical phase of rigorous, prospective clinical evaluation.”
 

Study rationale and details

Routine mammograms save lives, but the workload for radiologists is high, and the quality of assessments varies widely, Dr. Salim and colleagues wrote. There are also problems with access in areas with few radiologists.

To address these issues, academic and commercial researchers have worked hard to apply AI – specifically, deep neural networks – to computer programs that read mammograms.

For this study, the investigators conducted the first third-party external validation of three competing algorithms. The three algorithms were not named in the report, but Lunit announced that its algorithm was the best-performing algorithm after the study was published. The other two algorithms did not perform as well and remain anonymous.

The investigators compared the algorithms’ assessments with the original radiology reports for 739 women who were diagnosed with breast cancer within 12 months of their mammogram and 8,066 women with negative mammograms who remained cancer free at a 2-year follow-up.

The women, aged 40-74 years, had conventional two-dimensional imaging read by two radiologists at the Karolinska University Hospital during 2008-2015. The subjects’ median age at screening was 54.5 years.

The algorithms gave a prediction score between 0 and 1 for each breast, with 1 denoting the highest level of cancer suspicion. To enable a comparison with the binary decisions of the radiologists, the output of each algorithm was dichotomized (normal or abnormal) at a cut point defined by the mean specificity of the first-reader radiologists, 96.6%.
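This dichotomization step can be sketched in a few lines of Python (a schematic with made-up scores, not the study's code): the cut point is placed so that the target fraction of known-negative cases falls at or below it, and sensitivity is then read off among the positives.

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_specificity):
    """Dichotomize continuous prediction scores (0-1, higher = more
    suspicious) at the cut point whose specificity among the known
    negatives matches a target value, then return that cut point and
    the resulting sensitivity among the known positives."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Place the threshold so that target_specificity of the negative
    # (cancer-free) cases score at or below it
    cut = np.quantile(np.sort(scores[labels == 0]), target_specificity)
    predicted_abnormal = scores > cut
    sensitivity = predicted_abnormal[labels == 1].mean()
    return cut, sensitivity
```

Fixing the cut point at the readers' mean specificity (96.6% here) is what makes the algorithm's binary calls directly comparable with the radiologists' normal/abnormal decisions.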

At a specificity of 96.6%, the sensitivity was 81.9% for the Lunit algorithm, 67.0% for one anonymous algorithm (AI-2), 67.4% for the other anonymous algorithm (AI-3), 77.4% for first-reader radiologists, and 80.1% for second-reader radiologists.

The investigators also ran their analysis at a cut point of 88.9% specificity. The sensitivity was 88.6% for the Lunit algorithm, 80.0% for AI-2, and 80.2% for AI-3.

“This can be compared with the Breast Cancer Surveillance Consortium benchmarks of 86.9% sensitivity at 88.9% specificity,” the authors wrote.

The most potent screening strategy was combining the Lunit algorithm with the first reader, which increased cancer detection by 8% but came at the cost of a 77% increase in abnormal assessments.

“More true-positive cases would likely be found, but a much larger proportion of false-positive examinations would have to be handled in the ensuing consensus discussion,” the authors wrote. “[A] cost-benefit analysis is required ... to determine the economic implications of adding a human reader at all.”

The team noted that the Lunit algorithm was trained on images of South Korean women acquired on GE equipment.

“Although we do not have ethnic descriptors of our study population, the vast majority of women in Stockholm are White, and all images in our study were acquired on Hologic equipment,” the authors wrote. “In training AI algorithms for mammographic cancer detection, matching ethnic and equipment distributions between the training population and the clinical test population may not be of highest importance.”

As for why the Lunit algorithm outperformed the other two algorithms, one explanation may be that the Lunit algorithm was trained on more mammograms – 72,000 cancer and 680,000 normal images (vs. 10,000 cancer and 229,000 normal images for AI-2; 6,000 cancer and 106,000 normal images for AI-3).

As for next steps, the investigators are planning a prospective clinical study to see how AI works as an independent reviewer of mammograms in a day-to-day clinical environment, both as a third reviewer and to help select women for follow-up MRI.

The current study was funded by the Stockholm County Council. The investigators disclosed financial relationships with the Swedish Research Council, the Swedish Cancer Society, Stockholm City Council, Collective Minds Radiology, and Pfizer. Dr. Lehman’s institution receives grants from GE Healthcare.

SOURCE: Salim M et al. JAMA Oncol. 2020 Aug 27. doi: 10.1001/jamaoncol.2020.3321.


FROM JAMA ONCOLOGY

Reassuring findings on SSRIs and diabetes risk in children

Article Type
Changed
Thu, 09/17/2020 - 11:25

 

SSRIs are associated with a much lower risk of type 2 diabetes (T2D) in children and adolescents than previously reported, new research shows.

Investigators found publicly insured patients treated with SSRIs had a 13% increased risk for T2D, compared with those not treated with these agents. In addition, those taking SSRIs continuously (defined as receiving one or more prescriptions every 3 months) had a 33% increased risk of T2D.

On the other hand, privately insured youth had a much lower increased risk – a finding that may be attributable to a lower prevalence of risk factors for T2D in this group.

“We cannot exclude that children and adolescents treated with SSRIs may be at a small increased risk of developing T2D, particularly publicly insured patients, but the magnitude of association was weaker than previously thought and much smaller than other known risk factors for T2DM, such as obesity, race, and poverty,” lead investigator Jenny Sun, PhD, said in an interview.

“When weighing the known benefits and risks of SSRI treatment in children and adolescents, our findings provide reassurance that the risk of T2DM is not as substantial as initially reported,” said Dr. Sun, a postdoctoral research fellow in the department of population medicine at Harvard Medical School’s Harvard Pilgrim Health Care Institute, Boston.

The study was published online Sept. 2 in JAMA Psychiatry.

Limited evidence

Previous research suggested that SSRIs increase the risk of T2D by up to 90% in children and adolescents.

However, the investigators noted, the study reporting this finding was too small to draw conclusions about the SSRI class as a whole and did not examine specific SSRIs.

In addition, although “several studies have reported that antidepressant use may be a risk factor for T2D in adults, evidence was limited in children and adolescents,” said Dr. Sun.

“Rapid changes in growth during childhood and adolescence can alter drugs’ pharmacokinetics and pharmacodynamics, so high-quality, age-specific data are needed to inform prescribing decisions,” she said.

For the current study, the researchers analyzed claims data on almost 1.6 million patients aged 10-19 years (58.3% female; mean age, 15.1 years) from two large claims databases.

The analysis focused on those with a diagnosis warranting treatment with an SSRI, including depression, generalized or social anxiety disorder, obsessive compulsive disorder, PTSD, panic disorder, or bulimia nervosa.

The Medicaid Analytic Extract database consisted of 316,178 patients insured through Medicaid or the Children’s Health Insurance Program. The IBM MarketScan database consisted of 211,460 privately insured patients. Patients were followed up for a mean of 2.3 and 2.2 years, respectively.

Patients who initiated SSRI treatment were compared with those with a similar indication but who were not taking an SSRI. Secondary analyses compared new SSRI users with patients who recently initiated treatment with bupropion, which has no metabolic side effects, or with patients who recently initiated psychotherapy.

“In observational data, it is difficult to mimic a placebo group, often used in RCTs [randomized, controlled trials], therefore several comparator groups were explored to broaden our understanding,” said Dr. Sun.

In addition, the researchers compared the individual SSRI medications, using fluoxetine as a comparator.

More than 100 potential confounders, or “proxies of confounders,” were taken into account, including demographic characteristics, psychiatric diagnoses, metabolic conditions, concomitant medications, and use of health care services.

The researchers conducted two analyses. The first was an intention-to-treat (ITT) analysis restricted to patients with one or more additional SSRI prescriptions during the 6 months following the index exposure assessment period.

Close monitoring required

An as-treated analysis estimated the association of continuous SSRI treatment (vs. untreated, bupropion treatment, and psychotherapy), with adherence assessed at 3-month intervals.

Initiation and continuation of SSRI treatment in publicly insured patients were both associated with a considerably higher risk of T2D, compared with untreated patients, and a steeper risk, compared with their privately insured counterparts.

For newly treated publicly insured patients initiated on SSRI treatment, the ITT adjusted hazard ratio was 1.13 (95% confidence interval, 1.04-1.22).

There was an even stronger association among continuously treated publicly insured patients, with an as-treated aHR of 1.33 (95% CI, 1.21-1.47). The authors noted that this corresponds to 6.6 additional T2D cases per 10,000 patients continuously treated for at least 2 years.
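
For intuition, the 6.6-per-10,000 figure can be inverted into an implied baseline risk, using the rare-event approximation that the risk ratio tracks the hazard ratio. This is back-of-envelope arithmetic under that stated assumption; the baseline figure below is inferred, not reported in the paper.

```python
# Back-of-envelope check (assumes risk ratio ~ hazard ratio for rare events;
# the implied baseline is an inference, not a figure from the paper).
ahr = 1.33                      # as-treated aHR, publicly insured patients
extra_per_10k = 6.6             # additional T2D cases per 10,000 treated >= 2 years
implied_baseline_per_10k = extra_per_10k / (ahr - 1.0)
print(round(implied_baseline_per_10k, 1))  # roughly 20 untreated cases per 10,000
```

In other words, a 33% relative increase translates into 6.6 extra cases per 10,000 only because the underlying 2-year incidence is itself on the order of 20 per 10,000, which is part of why the authors characterize the absolute risk as modest.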

The association was weaker in privately insured patients (ITT aHR, 1.01; 95% CI, 0.84-1.23; as-treated aHR, 1.10; 95% CI, 0.88-1.36).

The secondary analyses yielded similar findings: When SSRI treatment was compared with psychotherapy, the as-treated aHR for publicly insured patients was 1.44 (95% CI, 1.25-1.65), whereas the aHR for privately insured patients was lower at 1.21 (95% CI, 0.93-1.57).

The investigators found no increased risk when SSRIs were compared with bupropion, and the within-class analysis showed that none of the SSRIs carried an increased hazard of T2D, compared with fluoxetine.

“Publicly insured patients are enrolled in Medicaid and the Children’s Health Insurance Program, whereas privately insured patients are generally covered by their parent’s employer-sponsored insurance,” said Dr. Sun.

“Publicly insured patients are of lower socioeconomic status and represent a population with greater overall medical burden, more comorbidities, and a higher prevalence of risk factors for T2D, such as obesity, at the time of treatment initiation,” she said.

She added that high-risk children and youth should be closely monitored, and clinicians should also consider recommending dietary modifications and increased exercise to offset T2D risk.

Useful ‘real-world data’

William Cooper, MD, MPH, professor of pediatrics and health policy at Vanderbilt University Medical Center in Nashville, Tenn., said that the study “provides a fascinating look at risks of SSRI medications in children and adolescents.”

Dr. Cooper, who was not involved with the study, said that the authors “draw from real-world data representing two different populations and carefully consider factors which might confound the associations.”

The results, he said, “provide important benefits for patients, families, and clinicians as they weigh the risks and benefits of using SSRIs for children who need treatment for depression and anxiety disorders.

“As a pediatrician, I would find these results useful as I work with my patients, their families, and behavioral health colleagues in making important treatment decisions.”

The study was supported by a training grant from the program in pharmacoepidemiology at the Harvard School of Public Health. Dr. Sun disclosed no relevant financial relationships. Dr. Cooper disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

 

SSRIs are associated with a much lower risk of type 2 diabetes (T2D) in children and adolescents than previously reported, new research shows.

Investigators found publicly insured patients treated with SSRIs had a 13% increased risk for T2D, compared with those not treated with these agents. In addition, those taking SSRIs continuously (defined as receiving one or more prescriptions every 3 months) had a 33% increased risk of T2D.

On the other hand, privately insured youth had a much lower increased risk – a finding that may be attributable to a lower prevalence of risk factors for T2D in this group.

“We cannot exclude that children and adolescents treated with SSRIs may be at a small increased risk of developing T2D, particularly publicly insured patients, but the magnitude of association was weaker than previous thought and much smaller than other known risk factors for T2DM, such as obesity, race, and poverty,” lead investigator Jenny Sun, PhD, said in an interview.

“When weighing the known benefits and risks of SSRI treatment in children and adolescents, our findings provide reassurance that the risk of T2DM is not as substantial as initially reported,” said Dr. Sun, a postdoctoral research fellow in the department of population medicine at Harvard Medical School’s Harvard Pilgrim Health Care Institute, Boston.

The study was published online Sept. 2 in JAMA Psychiatry.

Limited evidence

Previous research suggested that SSRIs increase the risk of T2D by up to 90% in children and adolescents.

However, the investigators noted, the study reporting this finding was too small to draw conclusions about the SSRI class as a whole also did not examine specific SSRIs.

In addition, although “several studies have reported that antidepressant use may be a risk factor for T2D in adults, evidence was limited in children and adolescents,” said Dr. Sun.

“Rapid changes in growth during childhood and adolescents can alter drugs’ pharmacokinetics and pharmacodynamics, so high-quality, age-specific data are needed to inform prescribing decisions,” she said.

For the current study, the researchers analyzed claims data on almost 1.6 million patients aged 10-19 years (58.3% female; mean age, 15.1 years) from two large claims databases.

The analysis focused on those with a diagnosis warranting treatment with an SSRI, including depression, generalized or social anxiety disorder, obsessive compulsive disorder, PTSD, panic disorder, or bulimia nervosa.

The Medicaid Analytic Extract database consisted of 316,178 patients insured through Medicaid or the Children’s Health Insurance Program. The IBM MarketScan database consisted of 211,460 privately insured patients. Patients were followed up for a mean of 2.3 and 2.2 years, respectively.

Patients who initiated SSRI treatment were compared with those with a similar indication but who were not taking an SSRI. Secondary analyses compared new SSRI users with patients who recently initiated treatment with bupropion, which has no metabolic side effects, or with patients who recently initiated psychotherapy.

“In observational data, it is difficult to mimic a placebo group, often used in RCTs [randomized, controlled trials], therefore several comparator groups were explored to broaden our understanding,” said Dr. Sun.

In addition, the researchers compared the individual SSRI medications, using fluoxetine as a comparator.

A wide range of more than 100 potential confounders or “proxies of confounders,” were taken into account, including demographic characteristics, psychiatric diagnoses, metabolic conditions, concomitant medications, and use of health care services.

The researchers conducted two analyses. They included an intention-to-treat (ITT) analysis that was restricted to patients with one or more additional SSRI prescriptions during the 6 months following the index exposure assessment period.

 

 

Close monitoring required

An as-treated analysis estimated the association of continuous SSRI treatment (vs. untreated, bupropion treatment, and psychotherapy), with adherence assessed at 3-month intervals.

Initiation and continuation of SSRI treatment in publicly insured patients were both associated with a considerably higher risk of T2D, compared with untreated patients, and a steeper risk, compared with their privately insured counterparts.

For newly treated publicly insured patients initiated on SSRI treatment, the ITT adjusted hazard ratio was 1.13 (95% confidence interval, 1.04-1.22).

There was an even stronger association among continuously treated publicly insured patients, with an as-treated aHR of 1.33 (95% CI, 1.21-1.47). The authors noted that this corresponds to 6.6 additional T2D cases per 10,000 patients continuously treated for at least 2 years.

The association was weaker in privately insured patients (ITT aHR, 1.01; 95% CI, 0.84-1.23; as-treated aHR, 1.10; 95% CI, 0.88-1.36).

The secondary analyses yielded similar findings: When SSRI treatment was compared with psychotherapy, the as-treated aHR for publicly insured patients was 1.44 (95% CI, 1.25-1.65), whereas the aHR for privately insured patients was lower at 1.21 (95% CI, 0.93-1.57)

The investigators found no increased risk when SSRIs were compared with bupropion, and the within-class analysis showed that none of the SSRIs carried an increased hazard of T2D, compared with fluoxetine.

“Publicly insured patients are enrolled in Medicaid and the Children’s Health Insurance Program, whereas privately insured patients are generally covered by their parent’s employer-sponsored insurance,” said Dr. Sun.

“Publicly insured patients are of lower socioeconomic status and represent a population with greater overall medical burden, more comorbidities, and a higher prevalence of risk factors for T2D, such as obesity, at the time of treatment initiation,” she said.

She added that high-risk children and youth should be closely monitored and clinicians should also consider recommending dietary modifications and increased exercise to offset T2D risk.

Useful ‘real-world data’

William Cooper, MD, MPH, professor of pediatrics and health policy at Vanderbilt University Medical Center in Nashville, Tenn., said that the study “provides a fascinating look at risks of SSRI medications in children and adolescents.”

Dr. Cooper, who was not involved with the study, said that the authors “draw from real-world data representing two different populations and carefully consider factors which might confound the associations.”

The results, he said, “provide important benefits for patients, families, and clinicians as they weigh the risks and benefits of using SSRIs for children who need treatment for depression and anxiety disorders.

“As a pediatrician, I would find these results useful as I work with my patients, their families, and behavioral health colleagues in making important treatment decisions.”

The study was supported by a training grant from the program in pharmacoepidemiology at the Harvard School of Public Health. Dr. Sun disclosed no relevant financial relationships. Dr. Cooper disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

 

SSRIs are associated with a much lower risk of type 2 diabetes (T2D) in children and adolescents than previously reported, new research shows.

Investigators found publicly insured patients treated with SSRIs had a 13% increased risk for T2D, compared with those not treated with these agents. In addition, those taking SSRIs continuously (defined as receiving one or more prescriptions every 3 months) had a 33% increased risk of T2D.

On the other hand, privately insured youth had a much lower increased risk – a finding that may be attributable to a lower prevalence of risk factors for T2D in this group.

“We cannot exclude that children and adolescents treated with SSRIs may be at a small increased risk of developing T2D, particularly publicly insured patients, but the magnitude of association was weaker than previous thought and much smaller than other known risk factors for T2DM, such as obesity, race, and poverty,” lead investigator Jenny Sun, PhD, said in an interview.

“When weighing the known benefits and risks of SSRI treatment in children and adolescents, our findings provide reassurance that the risk of T2DM is not as substantial as initially reported,” said Dr. Sun, a postdoctoral research fellow in the department of population medicine at Harvard Medical School’s Harvard Pilgrim Health Care Institute, Boston.

The study was published online Sept. 2 in JAMA Psychiatry.

Limited evidence

Previous research suggested that SSRIs increase the risk of T2D by up to 90% in children and adolescents.

However, the investigators noted, the study reporting this finding was too small to draw conclusions about the SSRI class as a whole also did not examine specific SSRIs.

In addition, although “several studies have reported that antidepressant use may be a risk factor for T2D in adults, evidence was limited in children and adolescents,” said Dr. Sun.

“Rapid changes in growth during childhood and adolescents can alter drugs’ pharmacokinetics and pharmacodynamics, so high-quality, age-specific data are needed to inform prescribing decisions,” she said.

For the current study, the researchers analyzed claims data on almost 1.6 million patients aged 10-19 years (58.3% female; mean age, 15.1 years) from two large claims databases.

The analysis focused on those with a diagnosis warranting treatment with an SSRI, including depression, generalized or social anxiety disorder, obsessive compulsive disorder, PTSD, panic disorder, or bulimia nervosa.

The Medicaid Analytic Extract database consisted of 316,178 patients insured through Medicaid or the Children’s Health Insurance Program. The IBM MarketScan database consisted of 211,460 privately insured patients. Patients were followed up for a mean of 2.3 and 2.2 years, respectively.

Patients who initiated SSRI treatment were compared with those with a similar indication but who were not taking an SSRI. Secondary analyses compared new SSRI users with patients who recently initiated treatment with bupropion, which has no metabolic side effects, or with patients who recently initiated psychotherapy.

“In observational data, it is difficult to mimic a placebo group, often used in RCTs [randomized, controlled trials], therefore several comparator groups were explored to broaden our understanding,” said Dr. Sun.

In addition, the researchers compared the individual SSRI medications, using fluoxetine as a comparator.

A wide range of more than 100 potential confounders or “proxies of confounders,” were taken into account, including demographic characteristics, psychiatric diagnoses, metabolic conditions, concomitant medications, and use of health care services.

The researchers conducted two analyses. They included an intention-to-treat (ITT) analysis that was restricted to patients with one or more additional SSRI prescriptions during the 6 months following the index exposure assessment period.

 

 

Close monitoring required

An as-treated analysis estimated the association of continuous SSRI treatment (vs. untreated, bupropion treatment, and psychotherapy), with adherence assessed at 3-month intervals.

Initiation and continuation of SSRI treatment in publicly insured patients were both associated with a considerably higher risk of T2D, compared with untreated patients, and a steeper risk, compared with their privately insured counterparts.

For newly treated publicly insured patients initiated on SSRI treatment, the ITT adjusted hazard ratio was 1.13 (95% confidence interval, 1.04-1.22).

There was an even stronger association among continuously treated publicly insured patients, with an as-treated aHR of 1.33 (95% CI, 1.21-1.47). The authors noted that this corresponds to 6.6 additional T2D cases per 10,000 patients continuously treated for at least 2 years.

The association was weaker in privately insured patients (ITT aHR, 1.01; 95% CI, 0.84-1.23; as-treated aHR, 1.10; 95% CI, 0.88-1.36).

The secondary analyses yielded similar findings: When SSRI treatment was compared with psychotherapy, the as-treated aHR for publicly insured patients was 1.44 (95% CI, 1.25-1.65), whereas the aHR for privately insured patients was lower, at 1.21 (95% CI, 0.93-1.57).

The investigators found no increased risk when SSRIs were compared with bupropion, and the within-class analysis showed that none of the SSRIs carried an increased hazard of T2D, compared with fluoxetine.

“Publicly insured patients are enrolled in Medicaid and the Children’s Health Insurance Program, whereas privately insured patients are generally covered by their parent’s employer-sponsored insurance,” said Dr. Sun.

“Publicly insured patients are of lower socioeconomic status and represent a population with greater overall medical burden, more comorbidities, and a higher prevalence of risk factors for T2D, such as obesity, at the time of treatment initiation,” she said.

She added that high-risk children and youth should be closely monitored, and that clinicians should also consider recommending dietary modifications and increased exercise to offset T2D risk.

Useful ‘real-world data’

William Cooper, MD, MPH, professor of pediatrics and health policy at Vanderbilt University Medical Center in Nashville, Tenn., said that the study “provides a fascinating look at risks of SSRI medications in children and adolescents.”

Dr. Cooper, who was not involved with the study, said that the authors “draw from real-world data representing two different populations and carefully consider factors which might confound the associations.”

The results, he said, “provide important benefits for patients, families, and clinicians as they weigh the risks and benefits of using SSRIs for children who need treatment for depression and anxiety disorders.

“As a pediatrician, I would find these results useful as I work with my patients, their families, and behavioral health colleagues in making important treatment decisions.”

The study was supported by a training grant from the program in pharmacoepidemiology at the Harvard School of Public Health. Dr. Sun disclosed no relevant financial relationships. Dr. Cooper disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


The gut: a new therapeutic target for major depression?

Article Type
Changed
Thu, 09/17/2020 - 10:52

The gut microbiota differs significantly between patients with major depressive disorder (MDD) and healthy individuals and may be modifiable with a probiotic diet to improve stress and depression scores, two new studies suggest.


In one study, investigators compared stool samples between patients with MDD and healthy controls. They found significant differences in bacterial profiles between the two groups, as well as between patients who responded vs. those who were resistant to treatment.

“This finding further supports the relevance of an altered composition of the gut microbiota in the etiopathogenesis of MDD and suggests a role in response to antidepressants,” coinvestigator Andrea Fontana, MSc, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy, said in an interview.

Results from the second study showed significant improvements in self-reported stress, anxiety, and depression scores in healthy individuals following a “psychobiotic” diet (using probiotics or prebiotics to manipulate the microbiota to improve mental health) that was rich in fruit, vegetables, and fermented foods vs. those who received dietary advice alone.

The investigators, led by Kirsten Berding, PhD, APC Microbiome Ireland, University College Cork, Ireland, now plan on testing their psychobiotic diet in patients with MDD and hope the findings could be helpful in “the development of adjuvant therapeutic opportunities” where pharmacologic treatment is not effective.

Both studies were presented at the virtual congress of the European College of Neuropsychopharmacology, held online this year because of the COVID-19 pandemic.

A “hallmark” of major depression

Mr. Fontana and colleagues note that the mostly suboptimal response to pharmacologic treatments among patients with MDD is one of the factors that “contributes to the large socioeconomic burden” of the disease.

Previous research shows patients with MDD have gut dysbiosis, or an imbalance in the natural flora; that antidepressants have antimicrobial properties; and that probiotics have an antibiotic effect. However, the correlation between the composition of the gut microbiota and antidepressant response is poorly understood.

The investigators recruited 34 patients with MDD (aged 18-70 years) who were in a euthymic phase and who did not have comorbid conditions that could affect the gut microbiota.

Eight patients were treatment resistant, defined as a poor response to at least two adequate trials of different antidepressant classes, while 19 were treatment responsive and seven were treatment naive.

The researchers also recruited 20 healthy individuals via word of mouth to act as the control group. There were no significant differences between patients and the control group in terms of baseline characteristics.

Genomic sequencing of bacteria obtained from stool samples showed that it was possible to distinguish between patients with MDD and the healthy individuals, especially at the family, genus, and species levels.

In particular, there were significant differences in the Paenibacillaceae and Flavobacteriaceae families, for the genus Fenollaria, and the species Flintibacter butyricus, Christensenella timonensis, and Eisenbergiella massiliensis, among others.

Results also showed that the phyla Proteobacteria, Tenericutes, and the family Peptostreptococcaceae were more common in patients with treatment-resistant MDD, whereas the phylum Actinobacteria was more abundant in treatment responders.

Moreover, several bacteria were found only in the microbiota of patients with treatment-resistant MDD, while others were seen only in treatment-responsive patients. This made it possible to discriminate not only between treatment-resistant and -responsive patients but also between those two patient groups and healthy controls.

“The results of our study confirm that gut dysbiosis is a hallmark of MDD, and suggests that the gut microbiota of patients with treatment-resistant MDD significantly differs from responders to antidepressants,” Mr. Fontana said.

Psychobiotic diet

For the second study, Dr. Berding and colleagues note that research on “psychobiotics” has previously achieved “promising results.”

In addition, diet is both “one of the most influential modifying factors” for the gut microbiota and an easily accessible strategy, they wrote. However, there is also a paucity of studies in this area, they added.

The researchers randomly assigned healthy volunteers with relatively poor dietary habits to either a 4-week psychobiotic diet group (n = 21) or a control group (n = 19).


Individuals in the psychobiotic group were told to eat a diet rich in prebiotics, such as fruit and vegetables, fiber including whole grains and legumes, and fermented foods. The control group was educated on Irish healthy-eating guidelines.

Stool and saliva samples were collected and the participants completed several self-reported mental health questionnaires, as well as a 7-day food diary. They also took the socially evaluated cold-pressor test (SECPT) to measure acute stress responses.

Results showed that total daily energy intake decreased significantly in both the diet and control groups over the study period (P = .04 for both) but did not differ significantly between the groups.

In contrast, dietary fiber intake increased significantly in the diet group (P < .001) and was significantly higher than in the control group at the end of the intervention (P = .03).

Individuals in the diet group showed significant decreases in scores on the Perceived Stress Scale (P = .002) and the Beck Depression Inventory (P = .007) during the study, an effect that was not found in the control group.
 

Dietary intervention

There were no significant effects of diet on the acute stress response, but both groups showed improvements in self-concept, or perceived ability to cope, on the Primary Appraisal, Secondary Appraisal index (P = .03 for the diet group, P = .04 for the control group).

The results show that a dietary intervention targeted at the microbiota “can improve subjective feelings of stress and depression in a healthy population,” the investigators wrote.

However, elucidating the “contribution of the microbiota-gut-brain axis on the signaling response to dietary interventions” will require further studies on microbiota sequencing and biological measures of stress, they added.

This will “contribute to the understanding of the benefits of a psychobiotic diet on stress and anxiety,” wrote the researchers.

Dr. Berding said in an interview that while the consumption of dietary fiber changed the most in the diet group, “it would not be the only nutrient” that had an impact on the results, with fermented foods a likely candidate.

She said the next step is to test the dietary intervention in patients with MDD; however, “doing nutritional interventions in diseased populations is always difficult.”

Dr. Berding suggested that the best approach would be to study inpatients in a clinic, as “we would be able to provide every meal and only provide foods that are part of the dietary intervention.”

Although another option would be to conduct the study in outpatients, she noted that assessing inpatients “would give us the best control over compliance.”
 

“Brilliant ideas”

Commenting on the findings, Sergueï Fetissov, MD, PhD, professor of physiology at Rouen University, Mont-Saint-Aignan, France, said that although both studies bring attention to a possible role for the gut microbiota in MDD, neither “provide any experimental evidence of a causative nature.”


Dr. Fetissov, who was not involved in either study, noted that this topic has been the subject of clinical nutritional research for many years.

However, “we still need some strong evidence to prove that some bacteria can influence the regulation of mood and anxiety and stress,” he said.

In addition, researchers currently do not know what actually causes MDD. “How we can say the gut bacteria regulates something if we don’t know what really causes the altered mood?” said Dr. Fetissov.

He noted that over the last 50 years, there have been great advances in the development of drugs that alleviate depression and anxiety by regulating dopamine, serotonin, and other neurotransmitters. However, it is still unknown whether these reflect primary or secondary aspects of mood disorders.

Furthermore, it is not clear “how probiotics or bacteria can influence these neuronal pathways,” he said.

“The ideas are brilliant and I support them ... but we have to provide proof,” Dr. Fetissov concluded.

The research by Dr. Berding and colleagues is funded by a postdoctoral fellowship grant from the Irish Research Council. The study authors and Dr. Fetissov have disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


If PPIs are onboard, atezolizumab may not work for bladder cancer

Article Type
Changed
Mon, 03/22/2021 - 14:08

Proton pump inhibitors may short-circuit the benefits of atezolizumab (Tecentriq) in patients with advanced/metastatic urothelial cancer, according to a post hoc analysis of 1,360 subjects from two atezolizumab trials.

Proton pump inhibitor (PPI) use was associated with worse overall and progression-free survival among patients on atezolizumab, but there was no such association in a matched cohort receiving chemotherapy alone. In short, concomitant “PPI users had no atezolizumab benefit,” wrote the investigators led by Ashley Hopkins, PhD, a research fellow at Flinders University in Adelaide, Australia.

This is the first time that PPI use has been shown to be an independent prognostic factor for worse survival in this setting with atezolizumab use – but not with chemotherapy, wrote the authors of the study, published online in Clinical Cancer Research.

“PPIs are overused, or inappropriately used, in patients with cancer by up to 50%, seemingly from a perspective that they will cause no harm. The findings from this study suggest that noncritical PPI use needs to be approached very cautiously, particularly when an immune checkpoint inhibitor is being used to treat urothelial cancer,” Hopkins said in a press release.

Although about one third of cancer patients use PPIs, there has been growing evidence that the changes they induce in the gut microbiome impact immune checkpoint inhibitor (ICI) effectiveness. A similar study of pooled trial data recently found that PPIs, as well as antibiotics, were associated with worse survival in advanced non–small cell lung cancer treated with atezolizumab, while no such tie was found with chemotherapy (Ann Oncol. 2020;31:525-31. doi: 10.1016/j.annonc.2020.01.006).

The mechanism is uncertain. PPIs have been associated with T-cell tolerance, pharmacokinetic changes, and decreased gut microbiota diversity. High diversity, the investigators noted, has been associated with stronger ICI responses in melanoma. Antibiotics have been associated with similar gut dysbiosis.

“It is increasingly evident that altered gut microbiota impacts homeostasis, immune response, cancer prognosis, and ICI efficacy. The hypothetical basis of [our] research is that PPIs are associated with marked changes to the gut microbiota, driven by both altered stomach acidity and direct compound effects, and these changes may impact immunotherapy,” Hopkins said in an email to Medscape.

The associations with urothelial cancer hadn’t been investigated before, so Hopkins and his team pooled patient-level data from the single-arm IMvigor210 trial of atezolizumab for urothelial cancer and the randomized IMvigor211 trial, which pitted atezolizumab against chemotherapy for the indication.

The investigators compared the outcomes of the 471 subjects who were on a PPI from 30 days before to 30 days after starting atezolizumab with the outcomes of 889 subjects who were not on a PPI. Findings were adjusted for tumor histology and the number of prior treatments and metastases sites, as well as age, body mass index, performance status, and other potential confounders.

PPI use was associated with markedly worse overall survival (hazard ratio, 1.52; 95% confidence interval, 1.27-1.83; P < .001) and progression-free survival (HR, 1.38; 95% CI, 1.18-1.62; P < .001) in patients on atezolizumab but not chemotherapy. PPI use was also associated with worse objective response to the ICI (HR, 0.51; 95% CI, 0.32-0.82; P = .006).
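The confidence intervals above follow the standard log-scale construction for hazard ratios: a 95% CI is obtained by exponentiating log(HR) ± 1.96 × SE, where SE is the standard error of the log hazard ratio. As an illustrative check (not part of the published analysis), the SE implied by a reported HR and CI can be back-calculated and used to reconstruct the interval:

```python
import math

Z95 = 1.959964  # two-sided 97.5th percentile of the standard normal

def implied_se(ci_lo, ci_hi):
    """Back-calculate the standard error of log(HR) from a reported 95% CI."""
    return (math.log(ci_hi) - math.log(ci_lo)) / (2 * Z95)

def ci95(hr, se):
    """Reconstruct the 95% CI for a hazard ratio from its log-scale SE."""
    log_hr = math.log(hr)
    return math.exp(log_hr - Z95 * se), math.exp(log_hr + Z95 * se)

# Overall-survival result from the pooled analysis: HR 1.52 (95% CI, 1.27-1.83)
se = implied_se(1.27, 1.83)
lo, hi = ci95(1.52, se)
print(f"SE(log HR) ~ {se:.3f}; reconstructed 95% CI: {lo:.2f}-{hi:.2f}")
```

The reconstruction will not match the published bounds exactly when the reported interval is not perfectly symmetric on the log scale after rounding, but it recovers them to within rounding error.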

In the randomized trial, atezolizumab seemed to offer no overall survival benefit versus chemotherapy when PPIs were on board (HR, 1.04; 95% CI, 0.81-1.34), but it offered a substantial benefit when PPIs were not in use (HR, 0.69; 95% CI, 0.56-0.84). Findings were consistent when limited to the PD-L1 IC2/3 population.

It seems that PPIs negate “the magnitude of atezolizumab efficacy,” the investigators wrote.

Concomitant antibiotics made the effect of PPIs on overall survival with atezolizumab even worse (antibiotics plus PPI: HR, 2.51; 95% CI, 1.12-5.59; versus no antibiotics with PPI: HR, 1.44; 95% CI, 1.19-1.74).

The investigators cautioned that, although “the conducted analyses have been adjusted, there is the potential that PPI use constitutes a surrogate marker for an unfit or immunodeficient patient.” They called for further investigation with other ICIs, cancer types, and chemotherapy regimens.

The dose and compliance with PPI therapy were unknown, but the team noted that over 90% of the PPI subjects were on PPIs for long-term reasons, most commonly gastric protection and gastroesophageal reflux disease (GERD). Omeprazole, pantoprazole, and esomeprazole were the most frequently used.

There were no significant associations between PPI use and the first occurrence of atezolizumab-induced adverse events.

The study was funded by the National Breast Cancer Foundation (Australia) and the Cancer Council South Australia. Hopkins has disclosed no relevant financial relationships. Multiple study authors have financial ties to industry, including makers of ICIs. The full list can be found with the original article.

This article first appeared on Medscape.com.
