Neurodevelopmental concerns may emerge later in Zika-exposed infants


– Most infants prenatally exposed to Zika showed relatively normal neurodevelopment if their fetal MRI and birth head circumference were normal, but others with similarly normal initial measures appeared to struggle with social cognition and mobility as they got older, according to a new study.

Dr. Sarah Mulkey

“I think we need to be cautious with saying that these children are normal when these normal-appearing children may not be doing as well as we think,” lead author Sarah Mulkey, MD, of Children’s National Health System and George Washington University, Washington, said in an interview. “While most children are showing fairly normal development, there are some children who are … becoming more abnormal over time.”

Dr. Mulkey shared her findings at the Pediatric Academic Societies annual meeting. She and her colleagues had previously published a prospective study of 82 Zika-exposed infants’ fetal brain MRIs. In their new study, they followed up with the 78 Colombian infants from that study whose fetal neuroimaging and birth head circumference had been normal.

The researchers used the Alberta Infant Motor Scale (AIMS) and the Warner Initial Developmental Evaluation of Adaptive and Functional Skills (WIDEA) to evaluate 72 of the children, 34 of whom underwent assessment twice. Forty of the children were an average of 5.7 months old when evaluated, and 66 were an average of 13.5 months old.

As the children got older, their overall WIDEA z-score and their subscores in the social cognition domain and especially in the mobility domain trended downward. Three of the children had AIMS scores two standard deviations below normal, but the rest fell within the normal range.

Their WIDEA communication z-score hovered relatively close to the norm. Self-care also showed a slight downward slope, though not as substantial as in the social cognition and mobility domains.

The younger a child is, the fewer skills they generally show related to neurocognitive development, Dr. Mulkey explained. But as they grow older and are expected to show more skills, it becomes more apparent where gaps and delays might exist.

“We can see that there are a lot of kids doing well, but some of these kids certainly are not,” she said. “Until children have a long time to develop, you really can’t see these changes unless you follow them long-term.”

The researchers also looked separately at a subgroup of 19 children (26%) whose cranial ultrasounds showed mild nonspecific findings. These findings – such as lenticulostriate vasculopathy, choroid plexus cysts, subependymal cysts and calcifications – do not usually indicate any problems, but they appeared in a quarter of this population, considerably more than the approximately 5% typically seen in the general population, Dr. Mulkey said.

 

 

Though the findings did not reach significance, infants in this subgroup tended to have lower WIDEA mobility z-scores (P = .054) and lower AIMS scores (P = .26) than the Zika-exposed infants with normal cranial ultrasounds.

“Mild nonspecific cranial ultrasound findings may represent a mild injury” related to exposure to their mother’s Zika infection during pregnancy, the researchers suggested. “It may be a risk factor for the lower mobility outcome,” Dr. Mulkey said.

The researchers hope to continue later follow-ups as the children age.

The research was funded by the Thrasher Research Fund. Dr. Mulkey had no conflicts of interest.

REPORTING FROM PAS 2019

Vitals

 

Key clinical point: Apparently normal newborns exposed prenatally to Zika may show neurodevelopmental difficulties in later infancy.

Major finding: Zika-exposed infants with normal fetal MRI neuroimaging showed increasingly lower mobility and social cognition skills as they approached their first birthday.

Study details: The findings are based on neurodevelopmental assessments of 72 Zika-exposed Colombian children at 4-18 months old.

Disclosures: The research was funded by the Thrasher Research Fund. Dr. Mulkey had no conflicts of interest.
 


Marijuana during prenatal OUD treatment increases premature birth


– Marijuana is not a good idea during pregnancy, and it’s an even worse idea when women are being treated for opioid addiction, according to an investigation from East Tennessee State University, Mountain Home.

M. Alexander Otto/MDedge News
Dr. Darshan Shah

Marijuana use may become more common as legalization rolls out across the country, and legalization, in turn, may add to the perception that pot is harmless, and perhaps a good way to take the edge off during pregnancy and prevent morning sickness, said neonatologist Darshan Shah, MD, of the department of pediatrics at the university.

Dr. Shah wondered how that trend might impact treatment of opioid use disorder (OUD) during pregnancy, which has also become more common. The take-home is that “if you have a pregnant patient on medication-assisted therapy” for opioid addiction, “you should warn them against use of marijuana. It increases the risk of prematurity and low birth weight,” he said at the Pediatric Academic Societies annual meeting.

He and his team reviewed 2,375 opioid-exposed pregnancies at six hospitals in south-central Appalachia from July 2011 to June 2016. All of the women had used opioids during pregnancy, some illegally and others for OUD treatment or other medical issues; 108 had urine screens that were positive for tetrahydrocannabinol (THC) at the time of delivery.

Infants were born a mean of 3 days earlier in the marijuana group, and a mean of 265 g lighter. They were also more likely to be born before 37 weeks’ gestation (14% versus 6.5%); born weighing less than 2,500 g (17.6% versus 7.3%); and more likely to be admitted to the neonatal ICU (17.5% versus 7.1%).

On logistic regression to control for parity, maternal status, and tobacco and benzodiazepine use, prenatal marijuana exposure more than doubled the risk of prematurity (odds ratio, 2.35; 95% confidence interval, 1.3-4.23); tobacco and benzodiazepines did not increase the risk. Marijuana also doubled the risk of low birth weight (OR, 2.02; 95% CI, 1.18-3.47), about the same as tobacco and benzodiazepines.

The study had limitations. There was no controlling for a major confounder: the amount of opioids women took while pregnant. These data were not available, Dr. Shah said.

Neonatal abstinence syndrome was more common in the marijuana group (33.3% versus 18.1%), so it’s possible that women who used marijuana also used more opioids. “We suspect that opioid exposure was not uniform among all infants,” he said. There were also no data on the amount or way marijuana was used.

Marijuana-positive women were more likely to be unmarried, to be nulliparous, and to use tobacco and benzodiazepines.

There was no industry funding for the work, and Dr. Shah had no disclosures.


REPORTING FROM PAS 2019

Vitals

 

Key clinical point: Warn pregnant women being treated for opioid use disorder to stay away from marijuana.

Major finding: Marijuana use more than doubled the risk of prematurity and low birth weight.

Study details: Review of 2,375 opioid-exposed pregnancies at six hospitals in south-central Appalachia.

Disclosures: There was no industry funding for the work, and the lead investigator had no disclosures.


VIDEO: Givosiran cuts acute intermittent porphyria attacks in pivotal trial


– A novel RNA-inhibitor drug, givosiran, produced a large cut in acute porphyria attacks in a pivotal trial with 94 patients with acute hepatic porphyria.


Although the study identified some safety issues with givosiran, an RNA-inhibitor molecule delivered by subcutaneous injection once a month, the increases in liver enzyme levels and the decreased renal function seen in some patients did not appear severe or frequent enough to counterbalance the benefits to treated patients, who often have significant comorbidities and adverse effects because of their disease, Manisha Balwani, MD, said at the meeting sponsored by the European Association for the Study of the Liver. Among the 48 patients assigned to the givosiran group, one dropped out because of an adverse effect of treatment.

The results put givosiran on track to become the first Food and Drug Administration–approved treatment for acute hepatic porphyria, a set of similar, rare genetic diseases that produce symptoms in about 1 in every 10,000 people, although asymptomatic disease is likely more common (Hepatol Commun. 2019 Feb;3[2]:193-206). The trial outcomes were also notable for the dramatic improvements in life-disrupting symptoms like pain, nausea, and fatigue that many treated patients experienced.

Patients’ lives were “completely transformed” by givosiran treatment, Dr. Balwani said in a video interview. Patients also had a reduced need for analgesics, including opioids, said Dr. Balwani, a medical geneticist at the Icahn School of Medicine at Mount Sinai in New York.

The ENVISION (A Study to Evaluate the Efficacy and Safety of Givosiran [ALN-AS1] in Patients With Acute Hepatic Porphyrias) study randomized 94 patients who were at least 12 years old, had been diagnosed with an acute hepatic porphyria, and had experienced at least two porphyria attacks during the prior 6 months. The study ran at 36 sites in 18 countries. Enrolled patients averaged about 39 years old and had been diagnosed with a hepatic porphyria for an average of about 6 years. During the study, patients did not receive hemin (Panhematin) prophylaxis.

 

 

The study’s primary endpoint was the average annualized rate of porphyria attacks during 6 months of treatment, which was 3.2 attacks in 46 patients evaluable for efficacy on givosiran treatment and 12.5 attacks in 43 patients evaluable for efficacy in the control group, a 74% reduction in attacks with givosiran that was statistically significant, Dr. Balwani reported. The percentage of patients with no attacks during the study was 16% among control patients and 50% among those on givosiran. Future analysis of the study data will attempt to identify the patients with the best responses to givosiran.

Among the full cohort of 94 patients enrolled in the study, 21% of the givosiran-treated patients had a serious adverse reaction, and 17% had a severe adverse reaction, compared with rates of 9% and 11%, respectively, among controls. Three of the serious adverse reactions were judged related to givosiran treatment: one patient with pyrexia, one with abnormal liver function test results, and one patient who developed chronic kidney disease. A total of two patients in the givosiran group developed chronic kidney disease that warranted elective hospitalization for diagnostic evaluation, and an additional three patients on the drug developed chronic kidney disease that did not require hospitalization. Nausea affected 27% of patients on givosiran and 11% of the control patients. Injection-site reactions occurred in 17% of those on givosiran and in none of the placebo patients. An elevation in the serum level of alanine aminotransferase to more than three times the upper limit of normal occurred in 15% of the givosiran-treated patients and in 2% of the placebo patients.

Givosiran’s small RNA molecule inhibits production of 5‐aminolevulinic acid synthase 1 (ALAS‐1), the rate-limiting enzyme that drives production of the heme precursor molecules that are pathophysiologic in patients with acute hepatic porphyria.


SOURCE: Balwani M et al. J Hepatol. 2019 Apr;70(1):e81-2.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

– A novel RNA-inhibitor drug, givosiran, produced a large cut in acute porphyria attacks in a pivotal trial with 94 patients with acute hepatic porphyria.

Vidyard Video

Although the study also identified some safety issues with givosiran, an RNA-inhibitor molecule delivered by subcutaneous injection once a month, the increases in liver enzyme levels it produced in some patients as well as decreased renal function did not seem severe or frequent enough to counterbalance the benefits to treated patients, who often have significant comorbidities and adverse effects because of their disease, Manisha Balwani, MD, said at the meeting sponsored by the European Association for the Study of the Liver. Among the 48 patients assigned to the givosiran group, one patient dropped out because of an adverse effect of treatment.

The results put givosiran on track to become the first Food and Drug Administration–approved treatment for acute hepatic porphyria, a set of similar, rare genetic diseases that produce symptoms in about 1 in every 10,000 people, although asymptomatic disease is likely more common (Hepatol Commun. 2019 Feb;3[2]:193-206). The trial outcomes were also notable for the dramatic improvements in life-disrupting symptoms like pain, nausea, and fatigue that many treated patients experienced.

Patients’ lives were “completely transformed” by givosiran treatment, Dr. Balwani said in a video interview. Patients also had a reduced need for analgesics, including opioids, said Dr. Balwani, a medical geneticist at the Icahn School of Medicine at Mount Sinai in New York.

The ENVISION (A Study to Evaluate the Efficacy and Safety of Givosiran [ALN-AS1] in Patients With Acute Hepatic Porphyrias) study randomized 94 patients who were at least 12 years old and diagnosed with an acute hepatic porphyria, and had experienced at least two porphyria attacks during the prior 6 months. The study ran at 36 sites in 18 countries. Enrolled patients averaged about 39 years old, and had been diagnosed with a hepatic porphyria for an average of about 6 years. During the study, patients did not receive hemin (Panhematin) prophylaxis.

 

 

The study’s primary endpoint was the average annualized rate of porphyria attacks during 6 months of treatment, which was 3.2 attacks in 46 patients evaluable for efficacy on givosiran treatment and 12.5 attacks in 43 patients evaluable for efficacy in the control group, a 74% reduction in attacks with givosiran that was statistically significant, Dr. Balwani reported. The percentage of patients with no attacks during the study was 16% among control patients and 50% among those on givosiran. Future analysis of the study data will attempt to identify the patients with the best responses to givosiran.

Among the full cohort of 94 patients enrolled in the study, 21% of the givosiran-treated patients had a adverse reaction, and 17% had a severe adverse reaction, compared with rates of 9% and 11%, respectively, among controls. Three of the serious adverse reactions were judged related to givosiran treatment: one patient with pyrexia, one with abnormal liver function test results, and one patient who developed chronic kidney disease. A total of two patients in the givosiran group developed chronic kidney disease that warranted elective hospitalization for diagnostic evaluation, and an additional three patients on the drug developed chronic kidney disease that did not require hospitalization. Nausea affected 27% of patients on givosiran and 11% of the control patients. Injection-site reactions occurred in 17% of those on givosiran and in none of the placebo patients. An elevation in the serum level of alanine aminotransferase to more than three times the upper limit of normal of baseline occurred in 15% of the givosiran-treated patients and in 2% of the placebo patients.

Givosiran’s small RNA molecule inhibits production of 5‐aminolevulinic acid synthase 1 (ALAS‐1), the rate-limiting enzyme that drives production of the heme precursor molecules that are pathophysiologic in patients with acute hepatic porphyria.

[email protected]

SOURCE: Balwani M et al. J Hepatol. 2019 April 70(1):e81-2.

 

– A novel RNA-inhibitor drug, givosiran, produced a large cut in acute porphyria attacks in a pivotal trial with 94 patients with acute hepatic porphyria.

Vidyard Video

Although the study also identified some safety issues with givosiran, an RNA-inhibitor molecule delivered by subcutaneous injection once a month, the increases in liver enzyme levels it produced in some patients as well as decreased renal function did not seem severe or frequent enough to counterbalance the benefits to treated patients, who often have significant comorbidities and adverse effects because of their disease, Manisha Balwani, MD, said at the meeting sponsored by the European Association for the Study of the Liver. Among the 48 patients assigned to the givosiran group, one patient dropped out because of an adverse effect of treatment.

The results put givosiran on track to become the first Food and Drug Administration–approved treatment for acute hepatic porphyria, a set of similar, rare genetic diseases that produce symptoms in about 1 in every 10,000 people, although asymptomatic disease is likely more common (Hepatol Commun. 2019 Feb;3[2]:193-206). The trial outcomes were also notable for the dramatic improvements in life-disrupting symptoms like pain, nausea, and fatigue that many treated patients experienced.

Patients’ lives were “completely transformed” by givosiran treatment, Dr. Balwani said in a video interview. Patients also had a reduced need for analgesics, including opioids, said Dr. Balwani, a medical geneticist at the Icahn School of Medicine at Mount Sinai in New York.

The ENVISION (A Study to Evaluate the Efficacy and Safety of Givosiran [ALN-AS1] in Patients With Acute Hepatic Porphyrias) study randomized 94 patients who were at least 12 years old and diagnosed with an acute hepatic porphyria, and had experienced at least two porphyria attacks during the prior 6 months. The study ran at 36 sites in 18 countries. Enrolled patients averaged about 39 years old, and had been diagnosed with a hepatic porphyria for an average of about 6 years. During the study, patients did not receive hemin (Panhematin) prophylaxis.

 

 

The study’s primary endpoint was the average annualized rate of porphyria attacks during 6 months of treatment, which was 3.2 attacks in 46 patients evaluable for efficacy on givosiran treatment and 12.5 attacks in 43 patients evaluable for efficacy in the control group, a 74% reduction in attacks with givosiran that was statistically significant, Dr. Balwani reported. The percentage of patients with no attacks during the study was 16% among control patients and 50% among those on givosiran. Future analysis of the study data will attempt to identify the patients with the best responses to givosiran.

Among the full cohort of 94 patients enrolled in the study, 21% of the givosiran-treated patients had a adverse reaction, and 17% had a severe adverse reaction, compared with rates of 9% and 11%, respectively, among controls. Three of the serious adverse reactions were judged related to givosiran treatment: one patient with pyrexia, one with abnormal liver function test results, and one patient who developed chronic kidney disease. A total of two patients in the givosiran group developed chronic kidney disease that warranted elective hospitalization for diagnostic evaluation, and an additional three patients on the drug developed chronic kidney disease that did not require hospitalization. Nausea affected 27% of patients on givosiran and 11% of the control patients. Injection-site reactions occurred in 17% of those on givosiran and in none of the placebo patients. An elevation in the serum level of alanine aminotransferase to more than three times the upper limit of normal of baseline occurred in 15% of the givosiran-treated patients and in 2% of the placebo patients.

Givosiran’s small RNA molecule inhibits production of 5‐aminolevulinic acid synthase 1 (ALAS‐1), the rate-limiting enzyme that drives production of the heme precursor molecules that are pathophysiologic in patients with acute hepatic porphyria.


SOURCE: Balwani M et al. J Hepatol. 2019 April 70(1):e81-2.


REPORTING FROM ILC 2019


Key clinical point: Givosiran cut acute hepatic porphyria attacks in its pivotal trial.

Major finding: Patients treated with givosiran had 74% fewer acute porphyria attacks, compared with patients on placebo.

Study details: ENVISION, an international pivotal trial with 94 patients.

Disclosures: ENVISION was funded by Alnylam, the company developing givosiran. Dr. Balwani has been an advisor to and has received research funding from Alnylam. The center where Dr. Balwani works, the Icahn School of Medicine at Mount Sinai, in New York, holds patents related to givosiran that it has licensed to Alnylam.

Source: Balwani M et al. J Hepatol. 2019 April 70(1):e81-2.


Staging tool predicts post-RYGB complications

Article Type
Changed
Tue, 05/21/2019 - 15:52

 

BALTIMORE – A staging scale developed by the bariatric team at the University of Alberta has shown potential as a tool to accurately predict major complications 1 year after Roux-en-Y gastric bypass (RYGB) surgery, surpassing the predictive ability of body mass index alone, a researcher reported at the annual meeting of the Society of American Gastrointestinal and Endoscopic Surgeons.


Researchers at the university validated the predictive utility of the scale, known as the Edmonton Obesity Staging System (EOSS), in a retrospective chart review of 378 patients who had RYGB between December 2009 and November 2015 at Royal Alexandra Hospital in Edmonton, Alta. The EOSS uses a scale from 0 to 4 to score a patient’s risk for complications: the higher the score, the greater the risk of complications.

“The EOSS may help determine risk of major complications after RYGB, and, given its overall simplicity, you can also think of it as analogous to the American Society of Anesthesiologists physical status classification system or the New York Heart Association classification system for congestive heart failure,” said Samuel Skulsky, a 3rd-year medical student at the University of Alberta. “It may have utility as well in communicating to patients their overall risk.”

A previous study applied the EOSS score to the National Health and Nutrition Examination Survey to compare it with body mass index (BMI) as a predictive marker of mortality (CMAJ. 2011;183:E1059-66). Where the four BMI classifications were clustered on the Kaplan-Meier survival curve between 0.7 and 0.9 at 200 months post examination, the four EOSS stages analyzed, 0-3, showed more of a spread, from around 0.55 for stage 3 to near 1.0 for stage 0. This gave the researchers the idea that EOSS could also be used to predict morbidity and mortality specifically in obese patients scheduled for surgery, Mr. Skulsky said. “With the Kaplan-Meier survival curves, the EOSS actually nicely stratifies the patients with their overall survival,” he said. “In comparison, BMI did not do as well in stratifying overall mortality.”

The study reported the following 1-year complication rates in the EOSS stages:

  • Stage 0 (n = 14), 7.1%.
  • Stage 1 (n = 41), 4.9%.
  • Stage 2 (n = 297), 8.8%.
  • Stage 3 (n = 26), 23.1%.

There were no stage 4 patients in the study population.

 

 

The multivariable logistic regression analysis determined that patients with EOSS stage 3 had an adjusted odds ratio of 2.94 for 1-year complications, compared with patients at lower stages (P = .043).

“Although the patients with higher EOSS scores above 3 and … end-organ damage … may benefit from bariatric surgery, they inherently have higher postoperative risk,” Mr. Skulsky said. “We must take that into consideration.”

Among the limitations of the study, Mr. Skulsky acknowledged, were that it included only patients who had RYGB, that it had a bias toward patients with EOSS stage 2 score, and that it included no stage 4 patients. “They’re not commonly operated on,” he noted, “so we didn’t actually get to study the entire scoring system.”

The next step involves moving the analysis forward to the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program database, Mr. Skulsky said. “The results that we found so far are pretty encouraging,” he said.

Mr. Skulsky had no financial relationships to disclose.

SOURCE: Skulsky SL et al. SAGES 2019, Session SS12.

REPORTING FROM SAGES 2019


Key clinical point: The Edmonton Obesity Staging System is predictive of complications after Roux-en-Y gastric bypass surgery.

Major finding: Patients with an EOSS score of 3 had a nearly threefold greater incidence of complications at 1 year.

Study details: Retrospective chart review of 378 patients who had RYGB at a single center from 2009 through 2015.

Disclosures: Mr. Skulsky has no financial relationships to disclose.

Source: Skulsky SL et al. SAGES 2019, Session SS12.


Long-term antibiotic use may heighten stroke, CHD risk

Article Type
Changed
Thu, 12/15/2022 - 15:46

 

Among middle-aged and older women, 2 or more months’ exposure to antibiotics is associated with an increased risk of coronary heart disease or stroke, according to a study in the European Heart Journal.


Women in the Nurses’ Health Study who used antibiotics for 2 or more months between ages 40 and 59 years or at age 60 years and older had a significantly increased risk of cardiovascular disease, compared with those who did not use antibiotics. Antibiotic use between 20 and 39 years old was not significantly related to cardiovascular disease.

Prior research has found that antibiotics may have long-lasting effects on gut microbiota and relate to cardiovascular disease risk.

“Antibiotic use is the most critical factor in altering the balance of microorganisms in the gut,” said lead investigator Lu Qi, MD, PhD, in a news release. “Previous studies have shown a link between alterations in the microbiotic environment of the gut and inflammation and narrowing of the blood vessels, stroke, and heart disease,” said Dr. Qi, who is the director of the Tulane University Obesity Research Center in New Orleans and an adjunct professor of nutrition at the Harvard T.H. Chan School of Public Health in Boston.

To evaluate associations between life stage, antibiotic exposure, and subsequent cardiovascular disease, researchers analyzed data from 36,429 participants in the Nurses’ Health Study. The women were at least 60 years old and had no history of cardiovascular disease or cancer when they completed a 2004 questionnaire about antibiotic usage during young, middle, and late adulthood. The questionnaire asked participants to indicate the total time using antibiotics with eight categories ranging from none to 5 or more years.

The researchers defined incident cardiovascular disease as a composite endpoint of coronary heart disease (nonfatal myocardial infarction or fatal coronary heart disease) and stroke (nonfatal or fatal). They calculated person-years of follow-up from the questionnaire return date until date of cardiovascular disease diagnosis, death, or end of follow-up in 2012.

Women with longer duration of antibiotic use were more likely to use other medications and have unfavorable cardiovascular risk profiles, including family history of myocardial infarction and higher body mass index. Antibiotics most often were used to treat respiratory infections. During an average follow-up of 7.6 years, 1,056 participants developed cardiovascular disease.

In a multivariable model that adjusted for demographics, diet, lifestyle, reason for antibiotic use, medications, overweight status, and other factors, long-term antibiotic use – 2 months or more – in late adulthood was associated with significantly increased risk of cardiovascular disease (hazard ratio, 1.32), as was long-term antibiotic use in middle adulthood (HR, 1.28).

Although antibiotic use was self-reported, which could lead to misclassification, the participants were health professionals, which may mitigate this limitation, the authors noted. Whether these findings apply to men and other populations requires further study, they said.

 

 


Because of the study’s observational design, the results “cannot show that antibiotics cause heart disease and stroke, only that there is a link between them,” Dr. Qi said. “It’s possible that women who reported more antibiotic use might be sicker in other ways that we were unable to measure, or there may be other factors that could affect the results that we have not been able to take account of.”

“Our study suggests that antibiotics should be used only when they are absolutely needed,” he concluded. “Considering the potentially cumulative adverse effects, the shorter time of antibiotic use the better.”

The study was supported by National Institutes of Health grants, the Boston Obesity Nutrition Research Center, and the United States–Israel Binational Science Foundation. One author received support from the Japan Society for the Promotion of Science. The authors had no conflicts of interest.


SOURCE: Heianza Y et al. Eur Heart J. 2019 Apr 24. doi: 10.1093/eurheartj/ehz231.

Issue
Neurology Reviews-27(6)
Page Number
40

 


FROM THE EUROPEAN HEART JOURNAL

Publish date: April 28, 2019

 

Key clinical point: Among middle-aged and older women, 2 or more months’ exposure to antibiotics is associated with an increased risk of coronary heart disease or stroke.

Major finding: Long-term antibiotic use in late adulthood was associated with significantly increased risk of cardiovascular disease (hazard ratio, 1.32), as was long-term antibiotic use in middle adulthood (HR, 1.28).

Study details: An analysis of data from nearly 36,500 women in the Nurses’ Health Study.

Disclosures: The study was supported by National Institutes of Health grants, the Boston Obesity Nutrition Research Center, and the United States–Israel Binational Science Foundation. One author received support from the Japan Society for the Promotion of Science. The authors had no conflicts of interest.

Source: Heianza Y et al. Eur Heart J. 2019 Apr 24. doi: 10.1093/eurheartj/ehz231.


Smoking found not protective against uveitis attacks in axSpA patients

Article Type
Changed
Sun, 04/28/2019 - 15:33

 

Smoking does not appear to have protective effects against anterior uveitis attacks in patients with axial spondyloarthritis, according to prospective registry study data.


Both current and ex-smokers had increased uveitis rates versus never-smokers in the study, suggesting that the supposed protective effect of smoking found in previous axial spondyloarthritis studies was not causal, Sizheng Steven Zhao, MD, and his colleagues reported in the Annals of the Rheumatic Diseases.

“Spurious relationships can emerge when studies restrict to a disease population,” the researchers wrote.

The present analysis by Dr. Zhao and colleagues included 2,420 patients with axial spondyloarthritis in the British Society for Rheumatology Biologics Registry for Ankylosing Spondylitis. Of that group, 632 (26%) had a diagnosis of acute anterior uveitis over a total of 1,457 patient-years of follow-up.

Researchers looked specifically at the number of uveitis episodes per 12-month period, which ranged from 0 to 15 in the overall study cohort.

Current smokers had a 33% higher incidence of acute anterior uveitis episodes versus never-smokers, while ex-smokers had a 19% higher incidence, although the findings did not reach statistical significance, according to the researchers.

Because some studies have suggested that smoking may influence response to biologic therapy, Dr. Zhao and coinvestigators stratified patients into biologic and nonbiologic cohorts. In the biologic cohort, they found a 76% higher incidence per year of uveitis attacks for current smokers versus never-smokers, and a 29% increased incidence for ex-smokers versus never-smokers.

These findings are “consistent with increased risk of uveitis observed among smokers in the general population,” the researchers said. “Although nicotine may have anti-inflammatory properties, cigarette smoking is overall pro-inflammatory.”

Those results provide “yet another line of evidence” that should compel spondyloarthritis patients to quit smoking, the researchers added. Previous studies have suggested that smoking may increase radiographic progression and may reduce response to treatment.

The authors declared no competing interests. The registry study is supported by the British Society for Rheumatology, which has received funding from Pfizer, AbbVie, and UCB for the study.

SOURCE: Zhao SS et al. Ann Rheum Dis. 2019 Apr 20. doi: 10.1136/annrheumdis-2019-215348


 

Smoking does not appear to have protective effects against anterior uveitis attacks in patients with axial spondyloarthritis, according to prospective registry study data.

pmphoto/iStockphoto.com

Both current and ex-smokers had increased uveitis rates versus never-smokers in the study, suggesting that the supposed protective effect of smoking found in previous axial spondyloarthritis studies was not causal, Sizheng Steven Zhao, MD, and his colleagues reported in the Annals of the Rheumatic Diseases.

“Spurious relationships can emerge when studies restrict to a disease population,” the researchers wrote.

The present analysis by Dr. Zhao and colleagues included 2,420 patients with axial spondyloarthritis in the British Society for Rheumatology Biologics Registry for Ankylosing Spondylitis. Of that group, 632 (26%) had a diagnosis of acute anterior uveitis over a total of 1,457 patient-years of follow-up.

Researchers looked specifically at the number of uveitis episodes per 12-month period, which ranged from 0 to 15 in the overall study cohort.

Current smokers had a 33% higher incidence of acute anterior uveitis episodes versus never-smokers, while ex-smokers had a 19% higher incidence, although the findings did not reach statistical significance, according to the researchers.

Because some studies have suggested that smoking may influence response to biologic therapy, Dr. Zhao and coinvestigators stratified patients into biologic and nonbiologic cohorts. In the biologic cohort, they found a 76% higher incidence per year of uveitis attacks for current smokers versus never-smokers, and a 29% increased incidence for ex-smokers versus never-smokers.

These findings are “consistent with increased risk of uveitis observed among smokers in the general population,” the researchers said. “Although nicotine may have anti-inflammatory properties, cigarette smoking is overall pro-inflammatory.”

Those results provide “yet another line of evidence” that should compel spondyloarthritis patients to quit smoking, the researchers added. Previous studies have suggested that smoking may increase radiographic progression and may reduce response to treatment.

The authors declared no competing interests. The registry study is supported by the British Society for Rheumatology, which has received funding from Pfizer, AbbVie, and UCB for the study.

SOURCE: Zhao SS et al. Ann Rheum Dis. 2019 Apr 20. doi: 10.1136/annrheumdis-2019-215348

 

Smoking does not appear to have protective effects against anterior uveitis attacks in patients with axial spondyloarthritis, according to prospective registry study data.


Both current and ex-smokers had increased uveitis rates versus never-smokers in the study, suggesting that the supposed protective effect of smoking found in previous axial spondyloarthritis studies was not causal, Sizheng Steven Zhao, MD, and his colleagues reported in the Annals of the Rheumatic Diseases.

“Spurious relationships can emerge when studies restrict to a disease population,” the researchers wrote.

The present analysis by Dr. Zhao and colleagues included 2,420 patients with axial spondyloarthritis in the British Society for Rheumatology Biologics Registry for Ankylosing Spondylitis. Of that group, 632 (26%) had a diagnosis of acute anterior uveitis over a total of 1,457 patient-years of follow-up.

Researchers looked specifically at the number of uveitis episodes per 12-month period, which ranged from 0 to 15 in the overall study cohort.

Current smokers had a 33% higher incidence of acute anterior uveitis episodes versus never-smokers, while ex-smokers had a 19% higher incidence, although the findings did not reach statistical significance, according to the researchers.

Because some studies have suggested that smoking may influence response to biologic therapy, Dr. Zhao and coinvestigators stratified patients into biologic and nonbiologic cohorts. In the biologic cohort, they found a 76% higher incidence per year of uveitis attacks for current smokers versus never-smokers, and a 29% increased incidence for ex-smokers versus never-smokers.

These findings are “consistent with increased risk of uveitis observed among smokers in the general population,” the researchers said. “Although nicotine may have anti-inflammatory properties, cigarette smoking is overall pro-inflammatory.”

Those results provide “yet another line of evidence” that should compel spondyloarthritis patients to quit smoking, the researchers added. Previous studies have suggested that smoking may increase radiographic progression and may reduce response to treatment.

The authors declared no competing interests. The registry study is supported by the British Society for Rheumatology, which has received funding from Pfizer, AbbVie, and UCB for the study.

SOURCE: Zhao SS et al. Ann Rheum Dis. 2019 Apr 20. doi: 10.1136/annrheumdis-2019-215348

FROM ANNALS OF THE RHEUMATIC DISEASES
Vitals

 

Key clinical point: In contrast to previous studies, smoking was not linked to a lower rate of uveitis attacks in patients with axial spondyloarthritis.

Major finding: Current smokers had a 33% higher incidence of acute anterior uveitis episodes versus never-smokers, while ex-smokers had a 19% higher incidence.

Study details: Analysis including 2,420 patients with axial spondyloarthritis in the British Society for Rheumatology Biologics Registry for Ankylosing Spondylitis.

Disclosures: The authors declared no competing interests. The study is supported by the British Society for Rheumatology, which has received funding from Pfizer, AbbVie, and UCB for the study.

Source: Zhao SS et al. Ann Rheum Dis. 2019 Apr 20. doi: 10.1136/annrheumdis-2019-215348.


Perceived Physical Functioning Predicts Mortality

It seems to be self-evident that how you feel about your health affects your health, but what affects those feelings?

Researchers from Erasmus University, the Netherlands, and Monash University, Australia, say theirs is the first study to determine the independent association of various measures of subjective health with mortality. Previously, few studies had shown an effect of physical functioning independent of other subjective measures.

The researchers evaluated data on 5,538 adults who took part in the Rotterdam Study and who were followed for a mean of 12 years. One-third had cardiovascular disease, 8% had chronic obstructive pulmonary disease, and 38% had joint problems.

The researchers investigated six different measures of subjective health and how they related to all-cause mortality. They conceptualized subjective health—often associated with health and well-being—as a continuum with physical functioning at one end and mental health at the other. Physical functioning included basic activities of daily living (BADL), such as eating and grooming. Instrumental activities of daily living (IADL) included the cognitive attributes of performing self-reliant daily tasks, such as meal preparation and shopping. The researchers assessed mental health with scales measuring positive and negative affect as well as somatic symptoms (the physical manifestations of dysthymia) and quality of life.

“Importantly,” the researchers say, each of those indicators is strongly affected by both physical and mental aspects of health. For example, physical and functional decline are related to higher scores on dysthymia questionnaires.

During 48,534 person-years of follow-up, 2,021 people died. Only impairment in physical functioning assessed by either self-report of BADL or IADL was related to mortality. Quality of life, positive affect, somatic symptoms, and negative affect did not predict mortality once self-rated physical functioning was accounted for.

Clinically speaking, the researchers say, it might be good to focus interventions aimed at improving survival on subjective indicators of physical well-being: in other words, activities of daily living and what it takes to perform them.

 


VIDEO: Physicians fall short on adequate sleep, consumption of fruits and vegetables


– Physicians appear to meet Centers for Disease Control and Prevention guidelines for exercise, but they fall short when it comes to getting enough sleep and consuming an adequate amount of fruits and vegetables.

Those are key findings from a survey of 20 Tennessee-based physicians from a variety of medical specialties that Deepti G. Bulchandani, MD, presented at the annual scientific & clinical congress of the American Association of Clinical Endocrinologists.

Inspired by her daughter, Eesha Nachnani, Dr. Bulchandani, an endocrinologist with Saint Thomas Medical Partners in Hendersonville, Tenn., created a survey in which physicians were asked about their nutritional habits, as well as how much sleep and exercise they were getting. The duo found that only half of the survey respondents were eating at least 1.5-2 cups of fruit and 2-3 cups of vegetables a day, as recommended by the CDC, and only half were consuming less than 2,300 mg of sodium per day. They also found that only one in four physicians were sleeping more than 7 hours a day on a regular basis. The good news? All respondents met the recommended CDC guidelines for exercise.

“What was most neglected was sleep,” Dr. Bulchandani said. “[Electronic medical records are] taking a lot of time. We do have [work hours] protection for residents, but physicians don’t have rules that are set for them. I think that is taking a toll.”

Dr. Bulchandani reported having no financial disclosures.

REPORTING FROM AACE 2019


Older women with ESRD face higher mortality, compared with male counterparts


 

– In patients with end-stage renal disease, women older than 50 years have a significantly higher mortality, compared with their male counterparts, results from an analysis of national data showed.

“The racial and ethnic disparities in the prevalence, treatment, risks, and outcomes of [hypertension] in patients with CKD [chronic kidney disease], are well recognized,” the study’s senior author, Ricardo Correa, MD, said in an interview in advance of the annual scientific and clinical congress of the American Association of Clinical Endocrinologists. “Whites have better control of blood pressure, compared with Hispanics or African Americans with CKD, for example. On the other hand, gender differences in the outcome of blood pressure control and mortality across the different CKD stages have been very poorly studied, with conflicting results.”

The importance of gender difference has been mostly the focus in cardiovascular diseases, he continued, with compelling data revealing a higher incidence in men than in women of similar age, and a menopause-associated increase in cardiovascular disease in women.

“Whether the same can be said for hypertension, remains to be elucidated,” said Dr. Correa, an endocrinologist who directs the diabetes and metabolism fellowship at the University of Arizona in Phoenix.

In what he said is the first study of its kind, Dr. Correa and his colleagues set out to determine whether gender and menopausal age affect the inpatient survival rate of hypertensive U.S. patients across different stages of CKD. They drew from the 2005-2012 National Inpatient Sample to identify 2,121,750 hospitalized hypertensive patients and compared a number of factors between men and women, including crude mortality and mortality per CKD stage, menopausal age, length of stay, and total hospital charges.

Of the 2,121,750 patients, 1,092,931 (52%) were men and 1,028,819 (48%) were women; their mean age was 65 years. Among women, 32% had stage 3 CKD, 15% had stage 4 disease, 3% had stage 5 CKD, and 54% had end-stage renal disease (ESRD). Among men, 33% had stage 3 CKD, 13% had stage 4 disease, 3% had stage 5 CKD, and 51% had ESRD. The researchers observed that in-hospital crude mortality was significantly higher for men, compared with a matched group of women, at CKD stages 3 and 4 (3.09% vs. 3.29% for CKD stage 3; P less than .0001, and 4.05% vs. 4.36% for CKD stage 4; P = .0004), yet the difference was nonsignificant among those with ESRD (4.68% vs. 4.83%; P = .45).


When the researchers factored in menopausal age, they found that women with stage 3, 4, or 5 CKD who were aged 50 years or younger had a mortality rate similar to that of men with same stage disease, whereas women older than 50 years with ESRD had a significantly higher mortality, compared with their male counterparts, especially those of Asian, African American, and Hispanic descent (P less than .001, compared with those of white, non-Hispanic descent).



“One could hypothesize that cardiac remodeling in hemodialysis women may be different than that in hemodialysis men to the extent that it affects mortality,” Dr. Correa said. “However, it is unclear if the survival benefit for dialysis men is owing to the possibility of a selection bias or not. Dialysis women may not be receiving equal access to cardiovascular procedures or surgical interventions (arteriovenous fistula, for example) or women may not be offered adequate hemodialysis to the same extent as men are. Further investigations regarding sex-based differences in dialysis treatment are required.”

He acknowledged certain limitations of the study, including its observational design. “We lacked detailed information regarding the cause of death, dialysis efficiency, types of dialysis accesses, and left ventricular hypertrophy measurements. We did not account for transitions between different hemodialysis modalities [and] we do not have information about distances or traveling time to dialysis units.”

The study’s first author was Kelvin Tran, MD. The researchers reported having no financial disclosures.

[email protected]

REPORTING FROM AACE 2019

Vitals

 

Key clinical point: Gender and race affect inpatient mortality of hypertensive patients across chronic kidney disease stages to end-stage renal disease.

Major finding: Women older than 50 years with end-stage renal disease had significantly higher mortality, compared with their male counterparts, especially those of Asian, African American, and Hispanic descent (P less than .001 vs. those of white, non-Hispanic descent).

Study details: An observational study of more than 2 million hypertensive patients from the Nationwide Inpatient Sample.

Disclosures: Dr. Correa reported having no financial disclosures.


Depression treatment rates rose with expanded insurance coverage


 

Multiple national policies designed to expand insurance coverage for mental health services in the United States likely contributed to modest increases in treatment for depression, according to an analysis of three national medical expenditure surveys.

“These findings still need to be balanced against the fact that the lower-than-expected rate of treatment suggests that substantial barriers remain to individuals receiving treatment for their depression,” wrote Jason M. Hockenberry, PhD, of Emory University in Atlanta and his associates. The study was published in JAMA Psychiatry.

To examine trends in depression treatment and spending, especially after the passage of the Mental Health Parity and Addiction Equity Act in 2008 and the Affordable Care Act in 2010, the authors analyzed responses to the 1998, 2007, and 2015 Medical Expenditure Panel Surveys (MEPSs). The final analysis included 86,216 individuals with a mean age of 37.2 years.

From 1998 to 2015, rates of outpatient treatment for depression increased from 2.36 (95% confidence interval, 2.12-2.61) per 100 to 3.47 (95% CI, 3.16-3.79) per 100. The treated prevalence among white survey respondents was more than double that of black respondents in 2015, at 4.00 (95% CI, 3.58-4.43) per 100, compared with 1.91 (95% CI, 1.55-2.28) per 100. Though psychotherapy use declined from 1998 to 2007 and then increased slightly in 2015, the proportion of patients treated using pharmacotherapy stayed relatively constant at 81.9% (95% CI, 77.9%-85.9%) in 1998 and 80.8% (95% CI, 77.9%-83.7%) in 2015.

Total spending on outpatient depression treatment increased from $12,430,000 in 1998 to $15,554,000 in 2007, and $17,404,000 in 2015. The percentage of spending that came from self-pay decreased from 32% in 1998 to 20% in 2015. At the same time, the percentage of spending covered by Medicaid increased, from 19% in 1998 to 36% in 2015.

Dr. Hockenberry and his coauthors acknowledged the limitations of their study, including the pitfalls of relying on national surveys over long periods of time. Specifically, the MEPSs depended in part on inexact measures, such as memory of health care visits; the 2015 survey also had a response rate of only 47.7%. That said, they reinforced their findings by citing how additional surveys that assess major depression – including the 2016 National Survey on Drug Use and Health – “have found similar proportions of treated depression to what we find in the 2015 MEPS.”

The study was supported in part by the Commonwealth Fund, and Dr. Hockenberry also reported receiving grants from the Commonwealth Fund. No other conflicts of interest were reported.

SOURCE: Hockenberry JM et al. JAMA Psychiatry. 2019 Apr 24. doi: 10.1001/jamapsychiatry.2019.0633.

Publications
Topics
Sections

 

Multiple national policies designed to expand insurance coverage for mental health services in the United States likely contributed to modest increases in treatment for depression, according to an analysis of three national medical expenditure surveys.

“These findings still need to be balanced against the fact that the lower-than-expected rate of treatment suggests that substantial barriers remain to individuals receiving treatment for their depression,” wrote Jason M. Hockenberry, PhD, of Emory University in Atlanta and his associates. The study was published in JAMA Psychiatry.

To examine trends in depression treatment and spending, especially after the passage of the Mental Health Parity and Addiction Equity Act in 2008 and the Affordable Care Act in 2010, the authors analyzed responses to the 1998, 2007, and 2015 Medical Expenditure Panel Surveys (MEPSs). The final analysis included 86,216 individuals with a mean age of 37.2 years.

From 1998 to 2015, rates of outpatient treatment for depression increased from 2.36 (95% confidence interval, 2.12-2.61) per 100 to 3.47 (95% CI, 3.16-3.79) per 100. The treated prevalence among white survey respondents was more than double that of black respondents in 2015, at 4.00 (95% CI, 3.58-4.43) per 100, compared with 1.91 (95% CI, 1.55-2.28) per 100. Though psychotherapy use declined from 1998 to 2007 and then increased slightly in 2015, the proportion of patients treated using pharmacotherapy stayed relatively constant at 81.9% (95% CI, 77.9%-85.9%) in 1998 and 80.8% (95% CI, 77.9%-83.7%) in 2015.

Total spending on outpatient depression treatment increased from $12.43 billion in 1998 to $15.55 billion in 2007 and $17.40 billion in 2015. The share of spending paid out of pocket decreased from 32% in 1998 to 20% in 2015, while the share covered by Medicaid increased from 19% to 36% over the same period.

Dr. Hockenberry and his coauthors acknowledged the limitations of their study, including the pitfalls of relying on national surveys over long periods of time. Specifically, the MEPSs depended in part on inexact measures, such as respondents' recall of health care visits; the 2015 survey also had a response rate of only 47.7%. That said, they noted that other surveys assessing major depression – including the 2016 National Survey on Drug Use and Health – “have found similar proportions of treated depression to what we find in the 2015 MEPS.”

The study was supported in part by the Commonwealth Fund, and Dr. Hockenberry also reported receiving grants from the Commonwealth Fund. No other conflicts of interest were reported.

SOURCE: Hockenberry JM et al. JAMA Psychiatry. 2019 Apr 24. doi: 10.1001/jamapsychiatry.2019.0633.

 

FROM JAMA PSYCHIATRY

Vitals

 

Key clinical point: Treatment for – and spending on – depression both saw modest increases from 1998 to 2015.

Major finding: Rates of outpatient treatment for depression increased from 2.36 (95% confidence interval, 2.12-2.61) per 100 in 1998 to 3.47 (95% CI, 3.16-3.79) per 100 in 2015.

Study details: An analysis of 86,216 individuals from the 1998, 2007, and 2015 Medical Expenditure Panel Surveys.

Disclosures: The study was supported in part by the Commonwealth Fund, and the lead author also reported receiving grants from the Commonwealth Fund. No other conflicts of interest were reported.

Source: Hockenberry JM et al. JAMA Psychiatry. 2019 Apr 24. doi: 10.1001/jamapsychiatry.2019.0633.
