Waning vaccine immunity linked to pertussis resurgence
Waning vaccine-derived immunity, rather than the switch to acellular vaccines, may be the main driver of the resurgence of pertussis, researchers said.
In the March 28, 2018, edition of Science Translational Medicine, researchers reported on a study that used different models of transmission to explore what might be the cause of the steady increase in pertussis infections since the mid-1970s.
Using 16 years’ worth of detailed, age-stratified incidence data from Massachusetts, researchers found that the model which assumed a gradual waning in protection was the best fit for the observed patterns of pertussis incidence across the population.
This model suggested significant variability in how the level of protection changes over time, with a 10% risk of vaccine protection waning to zero within 10 years of completing routine vaccination and a 55% chance that the vaccine would confer lifelong protection.
“Crucially, we find that the vaccine is effective at reducing pathogen circulation but not so effective that eradication of this highly contagious bacterium should be possible without targeted booster campaigns,” wrote Matthieu Domenech de Cellès, PhD, of the Institut Pasteur at the University of Versailles (France), and his coauthors.
The model also considered the possibility that the whole-cell and acellular pertussis vaccines might show differences in immunity, which had been suggested as one explanation for the resurgence of the disease. However, the authors found little evidence of a marked epidemiological switch from the whole-cell to acellular vaccines, although their results did suggest the acellular vaccine has a moderately reduced efficacy.
“Our results suggest that the train of events leading to the resurgence of pertussis was set in motion well before the shift to the DTaP vaccine,” Dr. Domenech de Cellès and his associates said.
The model also pointed to large shifts in the age-specific immunological profile caused by the introduction of vaccination, which led to a reduction in transmission and also a reduction in natural infections in both vaccinated and unvaccinated individuals.
This meant individuals who either did not get vaccinated as children or who did not gain immunity from vaccination were growing to adulthood without ever being exposed to natural infection.
“Concurrently, older cohorts, with their long-lived immunity derived from natural infections experienced during the prevaccine period, were gradually dying out,” the authors said. “The resulting rise in the number of susceptible adults sets the stage for the pertussis resurgence, especially among adults.”
Two authors were supported by the National Institutes of Health and by Models of Infectious Disease Agent Study–National Institute of General Medical Sciences. No conflicts of interest were declared.
SOURCE: Domenech de Cellès M et al. Sci Transl Med. 2018 Mar 28;10:eaaj1748.
FROM SCIENCE TRANSLATIONAL MEDICINE
Irritability, depressive mood tied to higher suicidality risk in adolescence
Children who are particularly irritable, depressive, and anxious might be at greater risk of suicidality in adolescence, according to a population-based cohort study.
Researchers enrolled 1,430 participants from the Québec Longitudinal Study of Child Development, assessed yearly or biyearly at ages 6-12 years, with follow-up spanning ages 5 months to 17 years, according to a study published online March 28 in JAMA Psychiatry (doi: 10.1001/jamapsychiatry.2018.0174).
They found that girls who rated highly for irritability and for the depressive/anxious mood profile on the Behavior Questionnaire, a measure created for Canada’s National Longitudinal Study of Children and Youth, had a threefold higher risk of suicidality (odds ratio, 3.07; 95% confidence interval, 1.54-6.12). Meanwhile, boys had a twofold higher risk (OR, 2.13; 95% CI, 0.95-4.78), compared with children with low irritability and depressive/anxious mood.
“Exploratory analyses by sex indicated that this association was more important for girls than boys, as indicated by the need to prevent the exposure among 5 girls to avoid 1 case of suicidality,” wrote Massimiliano Orri, PhD, and his associates.
The rate of suicidality in children with high irritability and high depressive/anxious mood was 16.4%, compared with 11% in the group with the lowest symptom levels.
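The gap between the crude rates above and the adjusted odds ratios reported earlier can be made concrete with a quick calculation. The sketch below, using only the two rates given in this article, computes an unadjusted pooled odds ratio; the paper's estimates (3.07 for girls, 2.13 for boys) are adjusted and sex-stratified, so this crude figure is expectedly smaller.

```python
# Crude (unadjusted) odds ratio from the two reported suicidality rates.
# Uses only figures stated in the article; all adjustment for covariates
# and sex stratification in the paper itself is omitted here.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

p_high = 0.164  # suicidality rate, high irritability + high depressive/anxious mood
p_low = 0.110   # suicidality rate, lowest symptom levels

crude_or = odds(p_high) / odds(p_low)
print(f"crude OR ≈ {crude_or:.2f}")  # ~1.59, well below the adjusted ORs of 2-3
```

The difference illustrates why adjusted, stratified estimates can diverge substantially from a naive ratio of raw rates.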
Even in children with only moderate irritability and low depressive/anxious mood, a significant increase was found in the odds of showing suicidality, compared with the reference group (OR, 1.51; 95% CI, 1.02-2.25).
“Although previous studies reported associations between irritability during childhood and adolescence and later depression, anxiety, and suicidality, we found that even moderate levels of irritability may contribute to suicidal risk,” wrote Dr. Orri of the Bordeaux Population Health Research Centre at the Institut National de la Santé et de la Recherche Médicale in France.
Children with a high depressive/anxious mood profile showed the same odds of suicidality as those of the reference group.
The authors noted that there was considerable stability in developmental profiles, so children who showed the highest levels of symptoms at age 6 were likely to exhibit those same high levels at age 12.
They also commented on their study’s use of an “innovative, person-centered approach” to describe the joint course of these moods over the time course of the study.
The investigators cited several limitations. One is that the assessment of childhood symptoms was based on teacher reports only, so depressive/anxious mood might have been underrated compared with irritability “because internalizing symptoms may be more difficult to observe in a school setting than externalized symptoms.”
Dr. Orri and two associates reported receiving support from the Canadian Institutes of Health Research. The other researchers cited funding from the National Alliance for Research on Schizophrenia and Depression and the Fonds de Recherche du Québec. No other financial disclosures were reported. The Québec Longitudinal Study of Child Development was supported by several entities, including the Québec Government’s Ministry of Health, Ministry of Education, and Ministry of Family Affairs.
SOURCE: Orri M et al. JAMA Psychiatry. 2018 Mar 28. doi: 10.1001/jamapsychiatry.2018.0174.
FROM JAMA PSYCHIATRY
Key clinical point: Irritability in children may predict suicidality in adolescence.
Major finding: Girls with high irritability and depressive/anxious mood profile had a threefold higher risk of suicidality in adolescence.
Study details: A population-based cohort study involving 1,430 participants.
Disclosures: Dr. Orri and two associates reported receiving support from the Canadian Institutes of Health Research. The other researchers cited funding from the National Alliance for Research on Schizophrenia and Depression and the Fonds de Recherche du Québec. No other financial disclosures were reported. The Québec Longitudinal Study of Child Development was supported by several entities, including the Québec Government’s Ministry of Health, Ministry of Education, and Ministry of Family Affairs.
Source: Orri M et al. JAMA Psychiatry. 2018 Mar 28. doi: 10.1001/jamapsychiatry.2018.0174.
Office-based screen predicts dementia in Parkinson’s disease
A simple, office-based screening tool was at least as effective as biomarker-based assessments in predicting which patients with Parkinson’s disease are likely to develop dementia in an international study.
The eight-item scale “is a short and easily-administered office tool that despite its simplicity can nonetheless accurately screen for dementia risk in Parkinson’s disease,” investigators noted in reporting the results of an international multicenter study in 607 patients with Parkinson’s disease but free of dementia at baseline. The results of the study, testing the predictive validity of the Montreal Parkinson’s Risk of Dementia Scale, were published online in JAMA Neurology.
Compared with patients in the low-risk group, those in the high-risk group had a 20-fold higher risk of dementia, and those in the intermediate risk group had a 10-fold higher risk (P less than 0.001).
A positive screen result – a cut-off of four or above – showed a sensitivity of 77.1% and a specificity of 87.2%. The positive predictive value was 43.9% and the negative predictive value was 96.7%, and the overall area under the receiver operating characteristic curve was 0.877.
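The predictive values above follow from the reported sensitivity and specificity once a prevalence is fixed. The sketch below shows the standard Bayes'-rule arithmetic; the prevalence figure used (roughly 11.5% of the cohort developing dementia) is not stated in this article and is an assumed value chosen for illustration.

```python
# How PPV and NPV follow from sensitivity, specificity, and prevalence.
# NOTE: the prevalence below is an assumption for illustration, not a
# figure reported in the study summary.

def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes' rule."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens: float, spec: float, prev: float) -> float:
    """Negative predictive value via Bayes' rule."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

sens, spec, prev = 0.771, 0.872, 0.115  # prev is an assumed value

print(f"PPV ≈ {ppv(sens, spec, prev):.1%}")  # close to the reported 43.9%
print(f"NPV ≈ {npv(sens, spec, prev):.1%}")  # close to the reported 96.7%
```

The high NPV despite a modest PPV is typical of a screening tool applied where the outcome is relatively uncommon: a negative screen is strongly reassuring even when many positive screens are false alarms.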
Benjamin K. Dawson, from the department of neurology and neurosurgery at McGill University, Montreal, and coauthors said a previous study using a combination of lumbar puncture, dopamine transporter scanning with [123I]FP-CIT single photon emission CT (DaTscan), and clinical markers had an area under the curve of 0.80, while a clinical-genetic risk score that included an analysis of GBA mutations had a reported AUC of 0.88.
The Montreal Parkinson’s Risk of Dementia Scale includes eight items: age above 70 years, male sex, falls and/or freezing, bilateral disease onset, history suggestive of rapid eye movement sleep behavior disorder, orthostatic hypotension, mild cognitive impairment, and visual hallucinations.
The authors noted that the risk scores were lower when the cohort was limited to patients without mild cognitive impairment. Because sex was also such a strong risk factor for dementia, the authors divided the results according to sex and found that the scale did perform somewhat better in men.
The authors commented that the main advantage of the Montreal Parkinson Risk of Dementia Scale was its practicality in an office-based clinical setting.
“Featuring demographic data as well as motor and nonmotor signs, the items of the scale are already often screened for in a routine office visit of a patient with [Parkinson’s disease], with no need for biological samples, neuroimaging, or genetic testing,” they wrote. “Therefore, compiling results is rapid for the clinician during a single outpatient office visit, and the results are available without delay or requirement for statistical software.”
The study was supported by the Fonds de la Recherche Sante Quebec and the Canadian Institute of Health Research. One author declared travel and speaking fees and consultancies with the pharmaceutical industry, and he and two other authors declared a range of grants from other funding bodies including Fonds de la Recherche Sante Quebec. No other conflicts of interest were declared.
SOURCE: Dawson B et al. JAMA Neurol. 2018 Mar 26. doi:10.1001/jamaneurol.2018.0254.
FROM JAMA NEUROLOGY
Key clinical point: An eight-item screen predicts dementia in Parkinson’s disease.
Major finding: The screening tool has an area under the curve of 0.88.
Study details: An international multicenter study in 607 patients with Parkinson’s disease.
Disclosures: The study was supported by the Fonds de la Recherche Sante Quebec and the Canadian Institute of Health Research. One author declared travel and speaking fees and consultancies with the pharmaceutical industry, and he and two other authors declared a range of grants from other funding bodies including Fonds de la Recherche Sante Quebec. No other conflicts of interest were declared.
Source: Dawson BK et al. JAMA Neurol. 2018 Mar 26. doi:10.1001/jamaneurol.2018.0254.
DPP-4 inhibitors increase IBD risk in diabetes
Dipeptidyl peptidase-4 inhibitors may be associated with an increased risk of inflammatory bowel disease in patients with type 2 diabetes, a study has found.
Researchers reported the results of an observational cohort study of 141,170 patients with type 2 diabetes newly treated with noninsulin antidiabetic drugs, with 552,413 person-years of follow-up. Of these, 30,488 patients (21.6%) received at least one prescription for a dipeptidyl peptidase-4 inhibitor, and the median duration of use was 1.6 years.
The report was published March 21 in the BMJ.
The researchers found that dipeptidyl peptidase-4 (DPP-4) inhibitors were associated with a 75% increased risk of IBD, compared with other antidiabetic drugs (53.4 vs. 34.5 cases per 100,000 person-years; hazard ratio, 1.75; 95% confidence interval, 1.22-2.49).
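It is worth noting that the 75% figure is a covariate-adjusted hazard ratio, which need not equal the simple ratio of the two raw incidence rates. The sketch below computes that crude ratio from the rates quoted above, as a point of comparison.

```python
# Crude incidence rate ratio from the reported raw rates, for comparison
# with the adjusted hazard ratio of 1.75. Adjustment for confounders in
# the study itself accounts for the difference between the two figures.

rate_dpp4 = 53.4   # IBD cases per 100,000 person-years, DPP-4 inhibitor users
rate_other = 34.5  # IBD cases per 100,000 person-years, other antidiabetic drugs

crude_irr = rate_dpp4 / rate_other
print(f"crude rate ratio ≈ {crude_irr:.2f}")  # ~1.55, vs adjusted HR of 1.75
```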
The risk increased with longer duration of use, peaking at a nearly threefold increase in the risk of IBD after 3-4 years of taking DPP-4 inhibitors (hazard ratio, 2.90; 95% CI, 1.31-6.41), and declining to a 45% increase in risk beyond 4 years of use.
“Although the absolute risk is low, physicians should be aware of this possible association and perhaps refrain from prescribing dipeptidyl peptidase-4 inhibitors for people at high risk (that is, those with a family history of disease or with known autoimmune conditions),” wrote Devin Abrahami of McGill University, Montreal, and coauthors. “Moreover, patients presenting with persistent gastrointestinal symptoms such as abdominal pain or diarrhoea should be closely monitored for worsening of symptoms.”
The same pattern was seen with years since initiation of medication, with a peak in the risk of IBD seen at 3-4 years after initiation followed by a decline.
“This gradual increase in the risk is consistent with the hypothesis of a possible delayed effect of the use of dipeptidyl peptidase-4 inhibitors on the incidence of inflammatory bowel disease,” the authors wrote.
When compared directly with insulin, the use of DPP-4 inhibitors was associated with a more than twofold increase in the risk of IBD (HR, 2.28; 95% CI, 1.07-4.85).
The use of DPP-4 inhibitors was also associated with a greater than twofold increase in the risk of ulcerative colitis but no significant effect was seen for Crohn’s disease. However, the authors noted that this result was based on relatively few events and should be interpreted with caution.
The research did not find any difference in risk across different DPP-4 inhibitor drugs.
The DPP-4 enzyme is known to be expressed on the surface of cell types involved in immune response, and patients with IBD have been found to have lower serum DPP-4 enzyme concentrations than healthy controls.
Yet the authors said this was the first study to their knowledge that specifically investigated the effect of DPP-4 inhibitor use on the incidence of IBD.
One previous observational study actually found a decreased risk of a composite outcome of several autoimmune disorders – including IBD – with the use of DPP-4 inhibitors, but it did not report on IBD specifically. The authors also noted that DPP-4 may have a different biological function in IBD.
The Canadian Institutes of Health Research funded the study. No conflicts of interest were declared.
SOURCE: Abrahami D et al. BMJ. 2018;360:k872.
FROM THE BMJ
Key clinical point: Use of DPP-4 inhibitors may put patients with type 2 diabetes at increased risk of developing IBD.
Major finding: Use of DPP-4 inhibitors was linked to a 75% increase in IBD risk in patients with type 2 diabetes.
Study details: An observational cohort study of 141,170 patients with type 2 diabetes.
Disclosures: The Canadian Institutes of Health Research funded the study. No conflicts of interest were declared.
Source: Abrahami D et al. BMJ. 2018;360:k872.
Red meat intake linked to NAFLD risk
Higher dietary intake of red meat and processed meats such as salami may increase the risk of nonalcoholic fatty liver disease and insulin resistance, new research suggests.
In a cross-sectional study, published in the March 20 edition of the Journal of Hepatology, researchers used food-frequency questionnaires to examine red and processed meat consumption in 789 adults aged 40-70 years, including information on cooking methods.
They found that those who reported a total meat intake above the median had a significant 49% higher odds of nonalcoholic fatty liver disease (NAFLD) (odds ratio, 1.49; 95% confidence interval, 1.05-2.13; P = .028) and 63% greater odds of insulin resistance (OR, 1.63; 95% CI, 1.12-2.37; P = .011), even after adjustment for potential confounders such as body mass index, physical activity, smoking, alcohol, and saturated fat and cholesterol intake.
Those whose intake of red and/or processed meat was above the median had a 47% greater odds of NAFLD (P = .031), and a 55% greater odds of insulin resistance (P = .020).
Even when the analysis was limited to nondiabetic participants, the study still showed a significant relationship between higher intake of red and processed meat, and insulin resistance (J Hepatol. 2018 Mar 20. doi: 10.1016/j.jhep.2018.01.015).
“It can be claimed that the harmful association with meat may, at least partially, be related to a generally less healthy diet or lifestyle characterizing people who eat more red or processed meat, rather than a causal effect of meat,” wrote Shira Zelber-Sagi, PhD, of the department of gastroenterology at Tel Aviv Medical Center, and coauthors. “However, in the current study we meticulously adjusted the association with meat for other nutritional and lifestyle parameters to minimize confounding as much as possible.”
There was also a significant association between unhealthy cooking methods such as frying, broiling, and grilling – which are known to increase the quantity of heterocyclic amines (HCA) in the meat – and insulin resistance.
Individuals who ate one or more portions of meat cooked by these methods showed a higher prevalence of insulin resistance than those who ate fewer than one portion per week (36.0% vs. 22.2%; P = .004). Researchers also used the food-frequency questionnaires to calculate the quantity of participants’ HCA intake, and found significantly higher odds of insulin resistance in individuals whose HCA intake was above the median.
Even among the 305 individuals with NAFLD, the prevalence of insulin resistance was higher in those with higher total meat intake and higher red and processed meat intake.
In this group, high HCA intake and high consumption of meat cooked by the unhealthy methods were associated with a fourfold higher odds of insulin resistance.
“Potential mechanisms for NAFLD may be related to the formation of reactive species during HCA metabolism, which can cause oxidation of lipids, proteins, and nucleic acids, resulting in oxidative stress, cell damage, and loss of biological function,” the authors wrote. “HCAs were also demonstrated to be bioactive in adipocytes in vitro, leading to increased expression of genes related to inflammation, diabetes and cancer risk.”
The authors noted that their findings supported the recommendations in dietary guidelines for cardiometabolic health, which suggest no more than one to two 100-g servings per week of red meat, and no more than one 50-g serving per week of processed meats.
“Although the specific effect of different types of meat and their quantities in NAFLD requires further research, these recommendations may be helpful in the treatment of patients with NAFLD at least in terms of CVD and diabetes prevention, and maybe for NAFLD prevention by reducing insulin resistance.”
The Israeli Ministry of Health supported the study. No conflicts of interest were declared.
SOURCE: Zelber-Sagi S et al. J Hepatol. 2018 Mar 20. doi: 10.1016/j.jhep.2018.01.015.
FROM JOURNAL OF HEPATOLOGY
Key clinical point: High red meat intake may increase the risk of nonalcoholic fatty liver disease.
Major finding: Higher meat intake was associated with a 49% greater odds of NAFLD.
Study details: A cross-sectional study of 789 adults.
Disclosures: The Israeli Ministry of Health supported the study. No conflicts of interest were declared.
Source: Zelber-Sagi S et al. J Hepatol. 2018 Mar 20. doi: 10.1016/j.jhep.2018.01.015.
Adding biomarkers beats NICE guidelines for detecting preeclampsia
FROM ULTRASOUND IN OBSTETRICS & GYNECOLOGY
Screening for preeclampsia using the United Kingdom’s National Institute for Health and Care Excellence (NICE) guidelines detects only about one-third of cases, according to a study published in Ultrasound in Obstetrics & Gynecology.
The United Kingdom-based prospective multicenter study, involving 16,747 singleton pregnancies, also looked at the effectiveness of a screening method that used Bayes’ theorem to combine maternal risk factors with biomarkers.
Preeclampsia developed in 473 (2.8%) pregnancies, and in 142 cases (0.8%) this led to preterm birth.
The NICE method of screening labels as high-risk women who have one major risk factor – such as a history of hypertensive disease in pregnancy or chronic kidney disease – or two moderate factors, including first pregnancy older than 40 years or a family history of preeclampsia.
This method of screening detected 30.4% of the cases of preeclampsia that developed and 40.8% of the cases that resulted in preterm birth. The overall screen-positive rate by the NICE method was 10.3% of all participants in the study (1,727 women).
The Bayes’ theorem-based method assessed maternal risk factors in combination with mean arterial pressure and serum pregnancy-associated plasma protein-A. The detection rate for all preeclampsia using this method was 42.5%, representing an improvement of 11.3% over the NICE method, after adjusting for the effects of aspirin use in both groups. Researchers also examined the effect of adding in the biomarkers of uterine artery pulsatility index and serum placental growth factor, and found this detected 82.4% of preterm preeclampsia.
“The performance of screening by a combination of maternal factors with biomarkers was far superior to that of screening by NICE guidelines,” wrote Min Yi Tan, MD, of King’s College Hospital in London, and co-authors.
Overall, 4.5% of women in the study took aspirin from 14 weeks’ gestation until 36 weeks or delivery, but only 23.2% of women who screened positive according to the NICE guidelines took aspirin.
“Such poor compliance may at least in part be attributed to the generally held belief, based on the results of a meta-analysis in 2007, that aspirin reduces the risk of PE by only about 10%,” Dr. Tan and co-authors wrote.
The authors acknowledged that their study did not explore the health economic implications of the combined screening approach, but said there was now accumulating evidence that the performance of first-trimester screening for preterm preeclampsia could be improved substantially by the additional measurement of biomarkers.
The study was sponsored by King’s College London, and supported by the National Institute for Health Research Efficacy and Mechanism Evaluation Programme, the Fetal Medicine Foundation and NIHR Collaboration for Leadership in Applied Health Research and Care South London at King’s College Hospital NHS Foundation Trust, with in-kind support from PerkinElmer Life and Analytical Sciences, and Thermo Fisher Scientific. No conflicts of interest were declared.
SOURCE: Tan MY et al. Ultrasound Obstet Gynecol. 2018 Mar 14. doi: 10.1002/uog.19039.
Key clinical point: Screening for preeclampsia using the United Kingdom’s National Institute for Health and Care Excellence (NICE) guidelines detects only around one-third of all preeclampsia cases, but the addition of biomarkers can improve detection significantly.
Major finding: The NICE guidelines detected 30.4% of cases of preeclampsia, while a Bayes’ theorem-based method using maternal risk factors and biomarkers detected 42.5%.
Data source: A prospective multicenter study of 16,747 singleton pregnancies.
Disclosures: The study was sponsored by King’s College London, and supported by the National Institute for Health Research Efficacy and Mechanism Evaluation Programme, the Fetal Medicine Foundation and NIHR Collaboration for Leadership in Applied Health Research and Care South London at King’s College Hospital NHS Foundation Trust, with in-kind support from PerkinElmer Life and Analytical Sciences, and Thermo Fisher Scientific. No conflicts of interest were declared.
Source: Tan MY et al. Ultrasound Obstet Gynecol. 2018 Mar 14. doi: 10.1002/uog.19039.
mainbar
Screening for preeclampsia using the United Kingdom’s National Institute for Health and Care Excellence (NICE) guidelines detects only about one-third of cases, according to a study published in Ultrasound in Obstetrics & Gynecology.
The United Kingdom-based prospective multicenter study, involving 16,747 singleton pregnancies, also looked at the effectiveness of a screening method that used Bayes’ theorem to combine maternal risk factors with biomarkers.
Preeclampsia developed in 473 (2.8%) pregnancies, and in 142 cases (0.8%) this led to preterm birth.
The NICE method of screening labels as high-risk women who have one major risk factor – such as a history of hypertensive disease in pregnancy or chronic kidney disease – or two moderate factors, including first pregnancy older than 40 years or a family history of preeclampsia.
This method of screening detected 30.4% of the cases of preeclampsia that developed and 40.8% of the cases that resulted in pre-term birth. The overall screen-positive rate by the NICE method was 10.3% of all participants in the study (1,727 women).
The Bayes’ theorem-based method assessed maternal risk factors in combination with mean arterial pressure and serum pregnancy-associated plasma protein-A. The detection rate for all preeclampsia using this method was 42.5%, representing an improvement of 11.3% over the NICE method, after adjusting for the effects of aspirin use in both groups. Researchers also examined the effect of adding in the biomarkers of uterine artery pulsatility index and serum placental growth factor, and found this detected 82.4% of preterm preeclampsia.
“The performance of screening by a combination of maternal factors with biomarkers was far superior to that of screening by NICE guidelines,” wrote Min Yi Tan, MD, of King’s College Hospital in London, and co-authors.
Overall, 4.5% of women in the study took aspirin from 14 weeks’ gestation until 36 weeks or delivery, but only 23.2% of women who screened positive according to the NICE guidelines took aspirin.
“Such poor compliance may at least in part be attributed to the generally held belief, based on the results of a meta-analysis in 2007, that aspirin reduces the risk of PE by only about 10%,” Dr. Tan and co-authors wrote.
The authors acknowledged that their study did not explore the health economic implications of the combined screening approach, but said there was now accumulating evidence that the performance of first-trimester screening for preterm preeclampsia could be improved substantially by the additional measurement of biomarkers.
The study was sponsored by King’s College London, and supported by the National Institute for Health Research Efficacy and Mechanism Evaluation Programme, the Fetal Medicine Foundation and NIHR Collaboration for Leadership in Applied Health Research and Care South London at King’s College Hospital NHS Foundation Trust, with in-kind support from PerkinElmer Life and Analytical Sciences, and Thermo Fisher Scientific. No conflicts of interest were declared.
SOURCE: Tan MY et al. Ultrasound Obstet Gynecol. 2018 Mar 14. doi: 10.1002/uog.19039.
Alcohol dependence may accelerate aging, frontal cortical deficits
Alcoholism compounds age-associated volume deficits in the frontal cortex, independent of the additional effects of drug dependence or hepatitis C infection, suggests new research published March 14 in JAMA Psychiatry.
Edith V. Sullivan, PhD, and her coauthors reported the results of a 14-year longitudinal study that used magnetic resonance imaging to examine the brains of 116 participants with alcohol dependence and 96 age-matched controls.
They found that participants with alcohol dependence as defined by the DSM-IV had significantly greater gray matter volume deficits in their frontal, temporal, parietal, cingulate and insular cortices, compared with controls – most prominently in the frontal subregions – with the only exception being the occipital lobe. When age was taken into account, age-related volume deficits were seen in the control group in five of the six cortical regions, but the alcoholism group showed a significantly greater deficit in the precentral and superior frontal cortex.
Dr. Sullivan, of the department of psychiatry and behavioral sciences at Stanford (Calif.) University, and her coauthors said the presence of age-alcoholism interactions puts older alcohol-dependent individuals at greater risk of age-associated functional compromise, even if their excessive drinking starts later in life.
More than half of individuals in the alcoholism group (54.5%) also reported drug dependence. The imaging showed that participants with alcohol use disorder who also reported opiate or cocaine use had smaller frontal cortex volumes, compared with those who were not drug users. However, the non–drug-dependent participants in the group still showed deficits in the volumes of the precentral, supplementary motor, and medial cortices, compared with controls.
“These findings in alcohol-dependent and control participants, examined 1 to 8 times or more during intervals of 1 week to 12.5 years, representing, to our knowledge, the largest and longest-studied group to date, support our study hypotheses regarding alcoholism-associated accelerated aging and cortical volume deficits independent of drug dependence or HCV infection comorbidity,” the authors wrote.
“We observed a selectivity of frontal cortex to age-alcoholism interaction beyond normal aging effects and independent of deficits related to drug dependence.”
Participants with both alcoholism and hepatitis C virus infection also showed greater frontal volume deficits, compared with those with alcoholism alone and compared with controls. “Thus, HCV infection, while having focal effects on frontal brain systems, targeted frontally based systems also vulnerable to chronic and extensive alcohol consumption,” the authors wrote. “Whether the compounded untoward effects of alcoholism and HCV infection on brain structure can be ameliorated with successful treatment of the infection remains to be determined.”
Dr. Sullivan and her coauthors cited several limitations. For example, non–alcohol-dependent or HCV-infected comparison groups were not available for analysis.
The study was supported by the National Institute on Alcohol Abuse and Alcoholism, and the Moldow Women’s Hope and Heal Fund. No conflicts of interest were declared.
SOURCE: Sullivan EV et al. JAMA Psychiatry. 2018 Mar 14. doi: 10.1001/jamapsychiatry.2018.0021.
With an aging population that is also showing significant increases in alcohol use and misuse, studies of the interaction between alcohol and aging and the brain are highly significant. The most compelling finding of this study is the impact of that interaction on the frontal cortex volume, because of the key role this region of the brain plays in executive function.
Deficits in frontal cortical volume associated with aging are hypothesized to also result in impulsivity and compulsivity, which opens up the possibility of greater vulnerability to alcohol use disorder later in life. So as excessive alcohol consumption contributes to this aging process, this aging process also might contribute to excessive drinking in the elderly as a form of self-medication of the negative emotional states associated with aging.
“The study ... provides compelling evidence that alcohol misuse during later adulthood could confer a greater risk of deficits in frontal lobe function beyond the deficits that typically occur with aging,” wrote George F. Koob, PhD.
Given this, it is critical that strategies be explored and implemented aimed at addressing the misuse of alcohol by older drinkers. “As Yoda might say, ‘Protect their brains, we must.’ ”
George F. Koob, PhD, is affiliated with the National Institute on Alcohol Abuse and Alcoholism at the National Institutes of Health, Rockville, Md. These comments are taken from an editorial (JAMA Psychiatry. 2018 March 14. doi: 10.1001/jamapsychiatry.2018.0009). No conflicts of interest were declared.
FROM JAMA PSYCHIATRY
Key clinical point: Older alcohol-dependent patients are at greater risk of functional compromise, even if their excessive drinking starts later in life.
Major finding: Individuals with alcohol use disorder show significantly greater deficits in the precentral and superior frontal cortex, compared with age-matched controls.
Data source: Longitudinal study of 116 participants with alcohol dependence and 96 age-matched controls.
Disclosures: The U.S. National Institute on Alcohol Abuse and Alcoholism, and the Moldow Women’s Hope and Heal Fund supported the study. No conflicts of interest were declared.
Source: Sullivan EV et al. JAMA Psychiatry. 2018 Mar 14. doi: 10.1001/jamapsychiatry.2018.0021.
Minor differences between electric and manual aspiration of molar pregnancy
Manual vacuum aspiration of molar pregnancy achieves similar outcomes to electric vacuum aspiration, although it may lead to a lower incidence of uterine synechia, according to a paper published in the April edition of Obstetrics & Gynecology.
While electric vacuum aspiration of molar pregnancy is the dominant technique in North America, in other parts of the world, such as Brazil, manual vacuum aspiration is far more commonly used.
In a retrospective cohort study, researchers looked at outcomes for 1,727 patients with molar pregnancy; 1,206 of these patients underwent electric vacuum aspiration, and 521 underwent manual vacuum aspiration.
Patients who underwent electric vacuum aspiration had significantly shorter operative times (25.3 minutes vs. 34.2 minutes; P less than .001) and showed a greater drop in hemoglobin levels after evacuation (–0.3 g/dL vs. –0.19 g/dL; P less than .001), compared with those who underwent manual vacuum aspiration.
The electric procedure was also associated with a significantly higher risk of intrauterine adhesions after the procedure, compared with the manual vacuum aspiration (5.2% vs. 1.2%; P less than .001).
Lilian Padrón, MD, of the Trophoblastic Disease Center at the Federal University of Rio de Janeiro, and coauthors commented that the vacuum pressure is about 100 mm Hg higher in the electric technique than in the manual technique, which may be responsible for the greater risk of synechia.
However, there were no significant differences between the two groups in the risk of developing postmolar gestational trophoblastic neoplasia (14.2% with electric vs. 17.3% with manual; P = .074), in the presence of metastatic disease (19.9% vs. 17.8%; P = .082), or in the need for multiagent chemotherapy.
Around 13% of patients had incomplete uterine evacuation, but the risk was similar between electric and manual vacuum aspiration.
“In our sample, formed exclusively by patients with molar pregnancy, the rate of complete uterine emptying did not reach 90% with either technique,” the authors wrote. “This may reflect not only the greater amount of molar trophoblastic tissue, compared with an abortion, but also the invasiveness of molar trophoblastic cells into the maternal decidua.”
There were nine cases of uterine perforation in the electric vacuum aspiration group (0.7%), and none in the manual group, although the difference was not statistically significant.
“Although differences in rates of uterine perforation as well as prolonged length of stay were not statistically different between the groups,” the authors wrote, “both of these were rare events, and we lacked sufficient power to detect differences in rare outcomes.”
No conflicts of interest were declared.
SOURCE: Padrón L et al. Obstet Gynecol. 2018;131:652-9.
FROM OBSTETRICS & GYNECOLOGY
Key clinical point: Manual vacuum aspiration of molar pregnancy achieves similar outcomes to electric vacuum aspiration, although it may lead to a lower incidence of uterine synechia.
Major finding: Electric vacuum aspiration of molar pregnancy is associated with a higher risk of synechia than manual vacuum aspiration.
Data source: A retrospective cohort study in 1,727 patients with molar pregnancy.
Disclosures: No conflicts of interest were declared.
Source: Padrón L et al. Obstet Gynecol. 2018;131:652-9.
Possible increased breast cancer risk found in women with schizophrenia
A meta-analysis has found an increased risk of breast cancer in women with schizophrenia, but its authors noted significant diversity of results across the included studies.
In the meta-analysis, Chuanjun Zhuo, MD, PhD, and Patrick Todd Triplett, MD, pooled the results of 12 cohort studies involving 125,760 women that examined the risk of breast cancer in women with schizophrenia, compared with the general population.
They found that women with schizophrenia had a standardized incidence ratio for breast cancer of 1.31 (95% confidence interval, 1.14-1.50; P less than .001) – a 31% higher incidence than in the general population. However, significant heterogeneity was found between studies, with the prediction interval ranging from 0.81 to 2.10. The report was published in JAMA Psychiatry.
“Accordingly, it is possible that a future study will show a decreased breast cancer risk in women with schizophrenia compared with the general population,” said Dr. Zhuo of Tianjin Medical University, China, and Dr. Triplett, of Johns Hopkins University, Baltimore.
Notably, one of the subgroup analyses showed that the association between schizophrenia and breast cancer was significant only in studies that excluded women who were diagnosed with breast cancer before they were diagnosed with schizophrenia (standardized incidence ratio, 1.34; 95% CI, 1.20-1.51; P less than .001).
The same was seen in studies where there were more than 100 cases of breast cancer (SIR, 1.31; 95% CI, 1.18-1.46; P less than .001), while the association was not significant in studies with fewer than 100 cases.
The authors said their findings contradict a hypothesis that schizophrenia might be protective against cancer.
“These results, together with our recent meta-analysis results showing no association with lung cancer risk but a reduced hepatic cancer risk in schizophrenia, indicated that the association between schizophrenia and cancer risk may be complicated and depend on the cancer site,” wrote Dr. Zhuo and Dr. Triplett.
In terms of possible mechanisms underlying the increased risk of breast cancer seen in this study, the authors suggested that people with schizophrenia could experience other clinical conditions such as obesity that might increase their risk of breast cancer.
“As breast cancer may be a hormone-dependent cancer, a significant positive association between plasma prolactin levels and the risk of breast cancer has been observed; in addition, increased prolactin levels have been documented in women with schizophrenia, particularly for those receiving certain antipsychotics,” they wrote.
While the incidence of cancer in people with schizophrenia might not necessarily differ from that of the general population, the authors said studies have found that people with schizophrenia have higher cancer mortality. Because “breast cancer prevention and treatment options are less optimal in women with schizophrenia, our results highlight that women with schizophrenia deserve focused care for breast cancer screening and treatment,” they wrote.
The Tianjin Health Bureau Foundation and the Natural Science Foundation of Tianjin, China, supported the study. No conflicts of interest were declared.
SOURCE: Zhuo C et al. JAMA Psychiatry. 2018 Mar 7. doi: 10.1001/jamapsychiatry.2017.4748.
A meta-analysis has found an increased risk of breast cancer in women with schizophrenia, but its authors noted significant diversity of results across the included studies.
In the meta-analysis, Chuanjun Zhuo, MD, PhD, and Patrick Todd Triplett, MD, presented the results of 12 cohort studies involving 125,760 women that showed the risk of breast cancer in women with schizophrenia, compared with the general population.
They found that women with schizophrenia had a 31% higher standardized incidence ratio of breast cancer (95% confidence interval, 1.14-1.50; P less than .001). However, significant heterogeneity was found between studies, with the prediction interval ranging from 0.81 to 2.10. The report was published in JAMA Psychiatry.
“Accordingly, it is possible that a future study will show a decreased breast cancer risk in women with schizophrenia compared with the general population,” said Dr. Zhuo of Tianjin Medical University, China, and Dr. Triplett, of Johns Hopkins University, Baltimore.
A meta-analysis has found an increased risk of breast cancer in women with schizophrenia, but its authors noted significant heterogeneity of results across the included studies.
In the meta-analysis, Chuanjun Zhuo, MD, PhD, and Patrick Todd Triplett, MD, pooled the results of 12 cohort studies involving 125,760 women to compare the risk of breast cancer in women with schizophrenia with that in the general population.
They found that women with schizophrenia had a 31% higher incidence of breast cancer than the general population (standardized incidence ratio, 1.31; 95% confidence interval, 1.14-1.50; P less than .001). However, significant heterogeneity was found between studies, with the prediction interval ranging from 0.81 to 2.10. The report was published in JAMA Psychiatry.
“Accordingly, it is possible that a future study will show a decreased breast cancer risk in women with schizophrenia compared with the general population,” said Dr. Zhuo of Tianjin Medical University, China, and Dr. Triplett, of Johns Hopkins University, Baltimore.
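The prediction interval the authors cite is wider than the confidence interval because it folds between-study heterogeneity into the uncertainty for a future study. A minimal sketch of that calculation, assuming an illustrative between-study standard deviation (tau) on the log scale; tau is not reported in the source and is chosen here only so the numbers resemble the published interval:

```python
import math

# Sketch of how a 95% prediction interval widens beyond the confidence
# interval in a random-effects meta-analysis. The between-study SD (tau)
# is an illustrative assumption, not a value reported by the authors.
k = 12                                   # number of cohort studies
sir, ci_low, ci_high = 1.31, 1.14, 1.50  # pooled SIR and its 95% CI
mu = math.log(sir)                       # pooling happens on the log scale
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # SE backed out of the CI
tau = 0.20                               # assumed between-study SD (log scale)
t_crit = 2.228                           # t quantile, 97.5th percentile, k - 2 = 10 df
half_width = t_crit * math.sqrt(tau**2 + se**2)
lo, hi = math.exp(mu - half_width), math.exp(mu + half_width)
print(f"95% prediction interval: {lo:.2f} to {hi:.2f}")
```

With this assumed tau, the interval comes out close to the reported 0.81-2.10, and it spans 1.0 even though the confidence interval does not; that is why the authors allow that a future study could find a decreased risk.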
Notably, one of the subgroup analyses showed that the association between schizophrenia and breast cancer was significant only in studies that excluded women who were diagnosed with breast cancer before they were diagnosed with schizophrenia (standardized incidence ratio, 1.34; 95% CI, 1.20-1.51; P less than .001).
The same was seen in studies where there were more than 100 cases of breast cancer (SIR, 1.31; 95% CI, 1.18-1.46; P less than .001), while the association was not significant in studies with fewer than 100 cases.
The authors said their findings contradict a hypothesis that schizophrenia might be protective against cancer.
“These results, together with our recent meta-analysis results showing no association with lung cancer risk but a reduced hepatic cancer risk in schizophrenia, indicated that the association between schizophrenia and cancer risk may be complicated and depend on the cancer site,” wrote Dr. Zhuo and Dr. Triplett.
In terms of possible mechanisms underlying the increased risk of breast cancer seen in this study, the authors suggested that people with schizophrenia could experience other clinical conditions such as obesity that might increase their risk of breast cancer.
“As breast cancer may be a hormone-dependent cancer, a significant positive association between plasma prolactin levels and the risk of breast cancer has been observed; in addition, increased prolactin levels have been documented in women with schizophrenia, particularly for those receiving certain antipsychotics,” they wrote.
While the incidence of cancer in people with schizophrenia might not necessarily differ from that of the general population, the authors said studies have found that people with schizophrenia have higher cancer mortality. Because “breast cancer prevention and treatment options are less optimal in women with schizophrenia, our results highlight that women with schizophrenia deserve focused care for breast cancer screening and treatment,” they wrote.
The Tianjin Health Bureau Foundation and the Natural Science Foundation of Tianjin, China, supported the study. No conflicts of interest were declared.
SOURCE: Zhuo C et al. JAMA Psychiatry. 2018 Mar 7. doi: 10.1001/jamapsychiatry.2017.4748.
FROM JAMA PSYCHIATRY
Key clinical point: Women diagnosed with schizophrenia should receive intensive screening and treatment for breast cancer.
Major finding: Women with schizophrenia had a 31% higher incidence of breast cancer than the general population (standardized incidence ratio, 1.31).
Data source: Meta-analysis of 12 cohort studies involving 125,760 women.
Disclosures: The Tianjin Health Bureau Foundation and the Natural Science Foundation of Tianjin, China, supported the work. No conflicts of interest were declared.
Source: Zhuo C et al. JAMA Psychiatry. 2018 Mar 7. doi: 10.1001/jamapsychiatry.2017.4748.
Among cannabinoids, cannabidiol has best evidence for decreasing seizures
A substantial proportion of patients with treatment-resistant epilepsy experienced a decrease in the frequency of seizures when treated with pharmaceutical-grade cannabidiol, according to findings from a systematic review.
The review, published online March 6 in the Journal of Neurology, Neurosurgery and Psychiatry, centers on 36 studies testing the use of cannabinoids as adjunctive treatments for treatment-resistant epilepsy, including six randomized controlled trials involving a total of 555 patients and 30 observational studies involving 2,865 patients.
Two randomized, controlled trials representing a total of 291 patients (one with 120 patients with Dravet syndrome and another with 171 patients with Lennox-Gastaut syndrome) found that patients receiving cannabidiol (CBD) were 74% more likely than those receiving placebo to achieve a greater than 50% reduction in seizures. In the observational studies, nearly half (48.5%) of the 970 patients across a range of epilepsy subtypes achieved a 50% or greater reduction in seizures.
Emily Stockings, PhD, of the National Drug and Alcohol Research Centre at the University of New South Wales, Sydney, and her coauthors estimated that eight patients would need to receive CBD treatment to achieve a 50% reduction in seizures in one person. However, they also pointed out that the quality of the evidence was mixed.
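The number-needed-to-treat estimate follows from simple arithmetic: NNT is the reciprocal of the absolute risk reduction. A minimal sketch, using an assumed placebo responder rate (an illustrative figure, not one taken from the review) chosen so the numbers line up with the reported estimate:

```python
import math

# Sketch of the number-needed-to-treat arithmetic: NNT is the reciprocal
# of the absolute risk reduction. The placebo responder rate below is an
# assumed, illustrative figure, not a value taken from the review.
relative_risk = 1.74   # CBD vs. placebo, >50% seizure reduction
p_placebo = 0.17       # assumed placebo responder rate (illustrative)
p_cbd = relative_risk * p_placebo
arr = p_cbd - p_placebo          # absolute risk reduction
nnt = math.ceil(1 / arr)         # convention: round NNT up
print(nnt)
```

With a placebo responder rate of 17%, the arithmetic reproduces the review's estimate that eight patients would need to receive CBD for one additional person to achieve a 50% seizure reduction.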
“There is insufficient evidence from moderate-quality or high-quality studies to assess whether there is a treatment effect of Cannabis sativa, CBD:THC combinations, or oral cannabis extracts,” they wrote.
Three randomized, controlled trials also looked at complete seizure freedom, finding a sixfold higher likelihood of total seizure freedom with CBD, compared with placebo. However, the number needed to treat to achieve this was 171, and again, the quality of evidence was described as “mixed.”
Just over half of patients treated with CBD reported improved quality of life, and significantly more parents and caregivers of those treated with CBD said the patient’s overall condition had improved. The pooled estimates from observational studies suggested that 55.8% of patients experienced improvements in their quality of life when using cannabinoids.
Studies involving patients with Dravet syndrome reported the greatest improvements in quality of life, compared with studies involving a mix of epilepsy syndromes. However, the authors noted that the studies involving Dravet syndrome patients were all case series in which every patient responded, and they suggested these results should be interpreted with caution.
The authors said they were more confident of the benefits of CBD in children than in adults, because the more recent, larger, and better-conducted randomized, controlled trials focused on children and adolescents.
“In RCTs, and most of the non-RCTs, cannabinoids were used as an adjunctive therapy rather than as a standalone intervention, so at present, there is little evidence to support any recommendation that cannabinoids can be recommended as a replacement for current standard [antiepileptic drugs],” the authors wrote.
The review also looked at the number of withdrawals, which the authors said could serve as an indicator of the tolerability and effectiveness of a treatment. The randomized, controlled trials showed no difference in withdrawal rates between patients on CBD and those on placebo, although CBD patients were more likely to withdraw because of adverse events.
There was a small but significant increase in the risk of adverse events with CBD, compared with placebo: particularly drowsiness, diarrhea, fatigue, and changes in appetite. There also was a higher incidence of serious adverse events (AEs), including status epilepticus and elevated aminotransferase levels.
“The fact that more patients withdrew or experienced AEs when receiving CBD than placebo indicates the need for clinicians and patients to weigh the risks and benefits of adding CBD to other AED [antiepileptic drug] treatment,” the authors wrote.
The study was supported by the Commonwealth Department of Health, the New South Wales Government Centre for Medicinal Cannabis Research and Innovation, the Victorian Department of Health and Human Services, and the Queensland Department of Health. Four authors were also supported by National Health and Medical Research Council grants. Three authors declared grants from the pharmaceutical industry, and one author has provided evidence to parliamentary committees on medical uses of cannabis in Australia and the United Kingdom, and is on the Australian Advisory Council on the Medicinal Use of Cannabis. No other conflicts of interest were declared.
SOURCE: Stockings E et al. J Neurol Neurosurg Psychiatry. 2018 Mar 6. doi: 10.1136/jnnp-2017-317168
FROM JOURNAL OF NEUROLOGY, NEUROSURGERY AND PSYCHIATRY
Key clinical point: Pharmaceutical-grade cannabidiol can reduce seizure frequency in treatment-resistant epilepsy, but the quality of the supporting evidence is mixed.
Major finding: Eight patients would need to receive cannabidiol treatment to achieve a 50% reduction in seizures in one person.
Data source: Systematic review of 36 studies.
Disclosures: The study was supported by the Commonwealth Department of Health, the New South Wales Government Centre for Medicinal Cannabis Research and Innovation, the Victorian Department of Health and Human Services, and the Queensland Department of Health. Four authors also were supported by National Health and Medical Research Council grants. Three authors declared grants from the pharmaceutical industry, and one author has provided evidence to parliamentary committees on medical uses of cannabis in Australia and the United Kingdom and is on the Australian Advisory Council on the Medicinal Use of Cannabis. No other conflicts of interest were declared.
Source: Stockings E et al. J Neurol Neurosurg Psychiatry. 2018 Mar 6. doi: 10.1136/jnnp-2017-317168.