International survey probes oxygen’s efficacy for cluster headache
According to the results, triptans are also highly effective, though with some side effects. Newer medications deserve further study, the researchers said.
To assess the effectiveness and adverse effects of acute cluster headache medications in a large international sample, Stuart M. Pearson, a researcher in the department of psychology at the University of West Georgia in Carrollton, and his coauthors analyzed data from the Cluster Headache Questionnaire. Respondents from more than 50 countries completed the online survey; most were from the United States, the United Kingdom, and Canada. The survey included questions about cluster headache diagnostic criteria and medication effectiveness, complications, and access to medications.
In all, 3,251 subjects participated in the questionnaire, and 2,193 respondents met criteria for the study; 1,604 had cluster headache, and 589 had probable cluster headache. Among the respondents with cluster headache, 68.8% were male, 78.0% had episodic cluster headache, and the average age was 46 years. More than half of respondents reported that triptans (54%) and oxygen (54%) were completely or very effective. The proportion of respondents who reported that ergot derivatives, caffeine or energy drinks, and intranasal ketamine were completely or very effective ranged from 14% to 25%. Patients were less likely to report high levels of efficacy for opioids (6%), intranasal capsaicin (5%), and intranasal lidocaine (2%).
Participants experienced few complications from oxygen, with 99% reporting no or minimal physical and medical complications, and 97% reporting no or minimal psychological and emotional complications. Patients also reported few complications from intranasal lidocaine, intranasal ketamine, intranasal capsaicin, and caffeine and energy drinks. For triptans, 74% of respondents reported no or minimal physical and medical complications, and 85% reported no or minimal psychological and emotional complications.
Among the 139 participants with cluster headache who were aged 65 years or older, responses were similar to those for the entire population. In addition, the 589 respondents with probable cluster headache reported similar efficacy data, compared with respondents with a full diagnosis of cluster headache.
“Oxygen in particular had a high rate of complete effectiveness, a low rate of ineffectiveness, and a low rate of physical, medical, emotional, and psychological side effects,” the investigators said. “However, respondents reported that it was difficult to obtain.”
Limited insurance coverage of oxygen may affect access, even though the treatment has a Level A recommendation for the acute treatment of cluster headache in the American Headache Society guidelines, the authors said. Physicians also may pose a barrier. A prior study found that 12% of providers did not prescribe oxygen for cluster headache because they doubted its efficacy or did not know about it. In addition, there may be concerns that the treatment could be a fire hazard in a patient population that has high rates of smoking, the researchers said.
Limitations of the study include the survey’s use of nonvalidated questions, the lack of a formal clinical diagnosis of cluster headache, and the grouping of all triptans, rather than assessing individual triptan medications, such as sumatriptan subcutaneous, alone.
The study received funding from Autonomic Technologies and Clusterbusters. One of the authors has served as a paid consultant to Eli Lilly as a member of the data monitoring committee for clinical trials of galcanezumab for cluster headache and migraine.
This article was updated 3/7/2019.
SOURCE: Pearson SM et al. Headache. 2019 Jan 11. doi: 10.1111/head.13473.
FROM HEADACHE
Key clinical point: Oxygen is a highly effective treatment for cluster headache with few complications.
Major finding: More than half of respondents (54%) reported that triptans and oxygen were completely or very effective.
Study details: Analysis of data from 1,604 people with cluster headache who completed the online Cluster Headache Questionnaire.
Disclosures: The study received funding from Autonomic Technologies and Clusterbusters. One of the authors has served as a paid consultant to Eli Lilly as a member of the data monitoring committee for clinical trials of galcanezumab for cluster headache and migraine.
Source: Pearson SM et al. Headache. 2019 Jan 11. doi: 10.1111/head.13473.
Novel plasma biomarkers may predict preclinical Alzheimer’s disease
Novel plasma biomarkers may predict preclinical Alzheimer’s disease, researchers reported in Science Advances.
“To our knowledge, this is the first time that a multianalyte plasma biomarker panel for an Alzheimer’s disease–related phenotype has been found and independently replicated by a nontargeted mass spectrometry approach,” said Nicholas J. Ashton, PhD, of King’s College London and the University of Gothenburg in Sweden, and his research colleagues.
Blood-based measures that predict amyloid-beta burden in preclinical Alzheimer’s disease have the potential to help investigators conduct clinical trials and aid in diagnostic management. However, this novel approach needs to be validated and translated “to a simpler automated platform suitable for wider utility,” the investigators noted. In addition, it is unclear whether the classifier can track changes in amyloid-beta burden or distinguish Alzheimer’s disease from other conditions with amyloid-beta pathology.
Advances in mass spectrometry technology have renewed interest in the analysis of plasma proteins in patients with various diseases. To assess whether proteomic discovery in plasma can help predict amyloid-beta burden in preclinical Alzheimer’s disease, Dr. Ashton and his colleagues studied 238 cognitively unimpaired individuals from the Australian Imaging, Biomarker and Lifestyle Flagship Study of Ageing (AIBL) and the Kerr Anglican Retirement Village Initiative in Ageing Health (KARVIAH). The participants had undergone PET to determine their amyloid-beta status. In the AIBL cohort (n = 144), 100 participants were amyloid-beta negative, and 44 were amyloid-beta positive. In the KARVIAH cohort (n = 94), 59 participants were amyloid-beta negative, and 35 were amyloid-beta positive. There were significantly more APOE4 carriers in the amyloid-beta–positive groups than in the amyloid-beta–negative groups. In addition, the amyloid-beta–positive groups tended to be older.
A support vector machine analysis created classifiers predicting amyloid-beta positivity in the AIBL cohort using demographics, proteins, or both. The researchers then tested each classifier in the KARVIAH dataset to identify which model best predicted amyloid-beta positivity. The optimal model included 10 protein features (prothrombin, adhesion G protein–coupled receptor, amyloid-beta A4 protein, NGN2, DNAH10, REST, NfL, RPS6KA3, GPSM2, FHAD1) and two demographic features (APOE4 count and age).
The classifier achieved a testing area under the receiver operator characteristic curve of 0.891 in the KARVIAH cohort to predict amyloid-beta positivity in cognitively unimpaired individuals with a sensitivity of 0.78 and specificity of 0.77.
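For readers curious about the mechanics of this derive-and-validate approach, the following is a minimal sketch, not the authors’ pipeline: it uses randomly generated stand-in feature matrices for the AIBL (training) and KARVIAH (testing) cohorts and a standard support vector machine with probability outputs, and it reports the same metrics quoted above (area under the curve, sensitivity, specificity). All data and parameter choices here are assumptions for illustration.

    # Minimal sketch of a derive-and-validate classifier workflow (hypothetical,
    # randomly generated data; not the authors' code). In the study, features would
    # be the 10 plasma proteins plus APOE4 count and age, and labels would be
    # PET-defined amyloid-beta status (1 = positive).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(144, 12)), rng.integers(0, 2, 144)  # stand-in for AIBL
    X_test, y_test = rng.normal(size=(94, 12)), rng.integers(0, 2, 94)      # stand-in for KARVIAH

    # Derive the classifier in the training cohort: scale features, then fit an SVM.
    model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True, random_state=0))
    model.fit(X_train, y_train)

    # Test it in the independent cohort and summarize performance.
    prob = model.predict_proba(X_test)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print("AUC:", roc_auc_score(y_test, prob))
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))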
The 10 protein features “represent a diverse array of pathways,” and the highest ranked feature was the serine protease prothrombin, which is a precursor to thrombin, the authors noted. “Multiple lines of evidence support that cerebrovascular disease may play a role in AD and that amyloid-beta may be involved in thrombosis, fibrinolysis, and inflammation via its interaction with the coagulation cascade,” the researchers wrote.
Two of the biomarkers – amyloid-beta A4 protein and NfL – have been examined in prior research and had a greater effect size in a secondary analysis that included participants with mild cognitive impairment and Alzheimer’s disease. This finding confirms “their connection with the more established disease state,” Dr. Ashton and colleagues said. In the secondary analysis, the optimal classifier included one demographic factor (APOE4 count) and nine protein features, eight of which also were used in the cognitively unimpaired classifier.
The study was funded in part by the National Institute for Health Research Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London, and many authors reported additional research support from various institutions. One author is an employee of Johnson & Johnson and a named inventor on unrelated biomarker intellectual property owned by Proteome Science and King’s College London.
SOURCE: Ashton NJ et al. Sci Adv. 2019 Feb 6. doi: 10.1126/sciadv.aau7220.
FROM SCIENCE ADVANCES
Key clinical point: Blood-based measures that predict amyloid-beta burden in preclinical Alzheimer’s disease have the potential to help investigators conduct clinical trials and aid in diagnostic management.
Major finding: A classifier developed using plasma proteomic analysis achieved an area under the receiver operator characteristic curve of 0.891.
Study details: An analysis of data from 238 cognitively unimpaired individuals from the Australian Imaging, Biomarker and Lifestyle Flagship Study of Ageing (AIBL) and the Kerr Anglican Retirement Village Initiative in Ageing Health (KARVIAH).
Disclosures: The study was funded in part by the National Institute for Health Research Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London, and many authors reported additional research support from various institutions. One author is an employee of Johnson & Johnson and a named inventor on unrelated biomarker intellectual property owned by Proteome Science and King’s College London.
Source: Ashton NJ et al. Sci Adv. 2019 Feb 6. doi: 10.1126/sciadv.aau7220.
Aerobic exercise may mitigate age-related cognitive decline
“The effect of aerobic exercise on executive function was more pronounced as age increased, suggesting that it may mitigate age-related declines,” wrote Yaakov Stern, PhD, chief of cognitive neuroscience in the department of neurology at Columbia University, New York, and his research colleagues.
Research indicates that aerobic exercise provides cognitive benefits across the lifespan, but controlled exercise studies have been limited to elderly individuals, the researchers wrote. To examine the effects of aerobic exercise on cognitive function in younger, healthy adults, they conducted a randomized, parallel-group, observer-masked, community-based clinical trial. The investigators enrolled 132 cognitively normal people aged 20-67 years with aerobic capacity below the median. About 70% were women, and participants’ mean age was about 40 years.
“We hypothesized that aerobic exercise would have cognitive benefits, even in this younger age range, but that age might moderate the nature or degree of the benefit,” Dr. Stern and his colleagues wrote.
Participants were nonsmoking, habitual nonexercisers with below-average fitness by American Heart Association standards. The investigators used baseline aerobic capacity testing to establish safe exercise measures and heart rate targets.
The investigators randomly assigned participants to a group that performed aerobic exercise or to a control group that performed stretching and toning four times per week for 6 months. Outcome measures included domains of cognitive function (such as executive function, episodic memory, processing speed, language, and attention), everyday function, aerobic capacity, body mass index, and cortical thickness.
During a 2-week run-in period, participants went to their choice of five YMCA of New York City fitness centers three times per week. They had to attend at least five of these sessions to stay in the study. In both study arms, training sessions consisted of 10-15 minutes of warm-up and cooldown and 30-40 minutes of workout. Coaches contacted participants weekly to monitor their progress, and participants wore heart rate monitors during each session. Exercises in the control group were designed to promote flexibility and improve core strength. In the aerobic exercise group, participants had a choice of exercises such as walking on a treadmill, cycling on a stationary bike, or using an elliptical machine, and they gradually increased their exercise intensity to 75% of maximum heart rate by week 5. A total of 94 participants – 50 in the control group and 44 in the aerobic exercise group – completed the 6-month trial.
Executive function, but not other cognitive measures, improved significantly in the aerobic exercise group. The effect on executive function was greater in older participants. For example, at age 40 years, the executive function measure increased by 0.228 standard deviation units from baseline; at age 60, it increased by 0.596 standard deviation units.
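To make the age moderation concrete, the two reported values can be linearly interpolated. The short sketch below is purely illustrative and assumes the age-by-treatment interaction is linear; the article itself reports only the figures at ages 40 and 60 quoted above.

    # Illustrative linear interpolation of the reported effect sizes
    # (0.228 SD units at age 40, 0.596 SD units at age 60); assumes linearity.
    def exercise_effect_sd(age):
        slope = (0.596 - 0.228) / (60 - 40)   # about 0.018 SD units per year of age
        return 0.228 + slope * (age - 40)

    print(round(exercise_effect_sd(50), 3))   # roughly 0.41 SD units at age 50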
In addition, cortical thickness increased significantly in the aerobic exercise group in the left caudal middle frontal cortex Brodmann area; this effect did not differ by age. Improvement on executive function in the aerobic exercise group was greater among participants without an APOE E4 allele, contrasting with the findings of prior studies.
“Since a difference of 0.5 standard deviations is equivalent to 20 years of age-related difference in performance on these tests, the people who exercised were testing as if they were about 10 years younger at age 40 and about 20 years younger at age 60,” Dr. Stern said in a press release. “Since thinking skills at the start of the study were poorer for participants who were older, our findings suggest that aerobic exercise is more likely to improve age-related declines in thinking skills rather than improve performance in those without a decline.”
Furthermore, aerobic exercise significantly increased aerobic capacity and significantly decreased body mass index, whereas stretching and toning did not.
“Participants in this trial scheduled their exercise sessions on their own and exercised by themselves,” the authors noted. “In addition, they were allowed to choose whatever aerobic exercise modality they preferred, so long as they reached target heart rates, enhancing the flexibility of the intervention.” Limitations of the study include its relatively small sample size and the large number of participants who dropped out of the study between consenting to participate and randomization.
The trial was funded by the National Institutes of Health. Dr. Stern reported receiving a grant from the California Walnut Commission and consulting with Eli Lilly, Axovant Sciences, Takeda, and AbbVie. A coauthor reported grant support from AposTherapy, LIH Medical, and the Everest Foundation.
SOURCE: Stern Y et al. Neurology. 2019 Jan 30. doi: 10.1212/WNL.0000000000007003.
FROM NEUROLOGY
Key clinical point: Among adults with below-average fitness, a 6-month aerobic exercise program significantly improves executive function.
Major finding: The effect on executive function was more pronounced with increasing age, rising from 0.228 standard deviation units at age 40 years to 0.596 at age 60.
Study details: A randomized, parallel-group, observer-masked, community-based clinical trial of 132 cognitively normal adults aged 20-67 years.
Disclosures: The study was funded by the National Institutes of Health. Dr. Stern reported receiving a grant from the California Walnut Commission and consulting with Eli Lilly, Axovant Sciences, Takeda, and AbbVie. A coauthor reported grant support from AposTherapy, LIH Medical, and the Everest Foundation.
Source: Stern Y et al. Neurology. 2019 Jan 30. doi: 10.1212/WNL.0000000000007003.
Clinical benefits persist 5 years after thymectomy for myasthenia gravis
Thymectomy may continue to benefit patients with myasthenia gravis 5 years after the procedure, according to an extension study published in Lancet Neurology.
The study evaluated the clinical status, medication requirements, and adverse events of patients with myasthenia gravis who completed a randomized controlled trial of thymectomy plus prednisone versus prednisone alone and agreed to participate in a rater-blinded 2-year extension.
“Thymectomy within the first few years of the disease course in addition to prednisone therapy confers benefits that persist for 5 years ... in patients with generalized nonthymomatous myasthenia gravis,” said lead study author Gil I. Wolfe, MD, chair of the department of neurology at the University at Buffalo in New York, and his research colleagues. “Results from the extension study provide further support for the use of thymectomy in management of myasthenia gravis and should encourage serious consideration of this treatment option in discussions between clinicians and their patients,” they wrote. “Our results should lead to revision of clinical guidelines in favor of thymectomy and could potentially reverse downward trends in the use of thymectomy in overall management of myasthenia gravis.”
The main 3-year results of the Thymectomy Trial in Nonthymomatous Myasthenia Gravis Patients Receiving Prednisone (MGTX) were reported in 2016; the international trial found that thymectomy plus prednisone was superior to prednisone alone at 3 years (N Engl J Med. 2016 Aug 11;375[6]:511-22). The extension study aimed to assess the durability of the treatment response.
MGTX enrolled patients aged 18-65 years who had generalized nonthymomatous myasthenia gravis of less than 5 years’ duration and Myasthenia Gravis Foundation of America Clinical Classification Class II-IV disease. Of 111 patients who completed MGTX, 68 entered the extension study, and 50 completed the 60-month assessment (24 patients in the prednisone alone group and 26 patients in the prednisone plus thymectomy group).
At 5 years, patients in the thymectomy plus prednisone group had significantly lower time-weighted average Quantitative Myasthenia Gravis (QMG) scores (5.47 vs. 9.34) and mean alternate-day prednisone doses (24 mg vs. 48 mg), compared with patients who received prednisone alone. Twelve of 35 patients in the thymectomy group and 14 of 33 patients in the prednisone group had at least one adverse event by month 60. No treatment-related deaths occurred in the extension phase.
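For readers unfamiliar with the outcome metric, a time-weighted average weights each score by the length of the interval it covers, so that sparse later visits do not dominate the summary. The sketch below is a generic trapezoidal-rule illustration with hypothetical visit data; it is not the MGTX statistical analysis.

    # Generic time-weighted average over irregularly spaced visits (trapezoidal rule).
    # Visit months and QMG scores below are hypothetical, for illustration only.
    def time_weighted_average(months, scores):
        area = sum((m2 - m1) * (s1 + s2) / 2
                   for (m1, s1), (m2, s2) in zip(zip(months, scores),
                                                 zip(months[1:], scores[1:])))
        return area / (months[-1] - months[0])

    print(time_weighted_average([0, 12, 36, 60], [12.0, 7.0, 6.0, 5.0]))  # 6.7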
At 5 years, significantly more patients who underwent thymectomy had minimal manifestation status (i.e., no functional limitations from the disease other than some muscle weakness) – 88% versus 58%. The corresponding figures at 3 years were 67% and 47%.
In addition, 3-year and 5-year data indicate that the need for hospitalization is reduced after surgery, compared with medical therapy alone, Dr. Wolfe said.
Two patients in each treatment arm had an increase of 2 points or more in the QMG score, indicating clinical worsening.
“Our current findings reinforce the benefit of thymectomy seen in [MGTX], dispelling doubts about the procedure’s benefits and how long those benefits last,” said Dr. Wolfe. “We do hope that the new findings help reverse the apparent reluctance to do thymectomy and that the proportion of patients with myasthenia gravis who undergo thymectomy will increase.”
The authors noted that the small sample size of the extension study may limit its generalizability.
The study received funding from the National Institutes of Health. Dr. Wolfe reported grants from the NIH, the Muscular Dystrophy Association, the Myasthenia Gravis Foundation of America, CSL-Behring, and ArgenX, as well as personal fees from Grifols, Shire, and Alexion Pharmaceuticals. Coauthors reported working with and receiving funds from agencies, foundations, and pharmaceutical companies.
SOURCE: Wolfe GI et al. Lancet Neurol. 2019 Jan 25. doi: 10.1016/S1474-4422(18)30392-2.
FROM LANCET NEUROLOGY
Key clinical point: The benefits of thymectomy for myasthenia gravis persist 5 years after the procedure.
Major finding: Patients who underwent thymectomy and received prednisone had lower time-weighted average Quantitative Myasthenia Gravis scores (5.47 vs. 9.34) and lower mean alternate-day prednisone doses (24 mg vs. 48 mg), compared with patients who received prednisone alone.
Study details: A rater-blinded 2-year extension study that enrolled 68 patients who had completed a 3-year randomized controlled trial.
Disclosures: The study received funding from the National Institutes of Health. Dr. Wolfe reported grants from the NIH, the Muscular Dystrophy Association, the Myasthenia Gravis Foundation of America, CSL-Behring, and ArgenX, as well as personal fees from Grifols, Shire, and Alexion Pharmaceuticals. Other authors reported working with and receiving funds from various agencies, foundations, and pharmaceutical companies.
Source: Wolfe GI et al. Lancet Neurol. 2019 Jan 25. doi: 10.1016/S1474-4422(18)30392-2.
Routine clinical data may predict psychiatric adverse effects from levetiracetam
Among patients with epilepsy, a simple model that incorporates factors such as a patient’s sex and history of depression, anxiety, and recreational drug use may help predict the risk of a psychiatric adverse effect from levetiracetam, according to a study published in JAMA Neurology.
“This study derived 2 simple models that predict the risk of a psychiatric adverse effect from levetiracetam” and can “guide prescription in clinical practice,” said Colin B. Josephson, MD, of the department of clinical neurosciences at the University of Calgary (Canada) and his research colleagues.
Levetiracetam is a commonly used first-line treatment for epilepsy because of its ease of use, broad spectrum of action, and safety profile, the researchers said. Still, psychiatric adverse reactions occur in as many as 16% of patients and frequently require treatment discontinuation.
To evaluate whether routine clinical data can predict which patients with epilepsy will experience a psychiatric adverse event from levetiracetam, the investigators analyzed data from The Health Improvement Network (THIN) database, which includes anonymized patient records from general practices in the United Kingdom. They assessed 21 variables for possible inclusion in prediction models. They identified these variables by searching the literature and weighing input from a panel of experts.
Their analysis included data from Jan. 1, 2000, to May 31, 2012. Among the more than 11 million patients in THIN, the researchers identified 7,300 incident cases of epilepsy. The researchers examined when patients received a first prescription for levetiracetam and whether patients experienced a psychiatric symptom or disorder within 2 years of the prescription.
Among 1,173 patients with epilepsy receiving levetiracetam, the median age was 39 years; about half were women. In all, 14.1% experienced a psychiatric symptom or disorder within 2 years of prescription. Women were more likely to report a psychiatric symptom (odds ratio, 1.41), as were patients with a history of social deprivation (OR, 1.15), anxiety (OR, 1.74), recreational drug use (OR, 2.02), or depression (OR, 2.20).
The final model included female sex, history of depression, history of anxiety, and history of recreational drug use. Low socioeconomic status was not included because “it would be challenging to assign this score in clinic,” the authors said.
“There was a gradient in risk probabilities increasing from 8% for 0 risk factors to 11%-17% for 1, 17%-31% for 2, 30%-42% for 3, and 49% when all risk factors were present,” Dr. Josephson and his colleagues indicated. “The discovered incremental probability of reporting a psychiatric sign can help generate an index of suspicion to counsel patients.”
Using the example of a female patient with a history of depression, the model “suggests she would be at risk,” with a 22% chance of a psychiatric adverse event in the 2 years after receiving a levetiracetam prescription.
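How the quoted risk gradient relates to the individual odds ratios can be shown with a rough, back-of-the-envelope calculation: convert the roughly 8% baseline risk to odds, multiply by the published odds ratio for each risk factor that is present, and convert back to a probability. The Python sketch below does that; it is an illustration that assumes the odds ratios combine multiplicatively against the zero-risk-factor baseline, not a reproduction of the authors’ fitted model.

# Illustrative only: combines the published odds ratios multiplicatively
# against the ~8% baseline risk reported for patients with no risk factors.
# This is an approximation, not the authors' fitted prediction model.

ODDS_RATIOS = {
    "female_sex": 1.41,
    "depression": 2.20,
    "anxiety": 1.74,
    "recreational_drug_use": 2.02,
}
BASELINE_RISK = 0.08  # reported risk with no risk factors


def approximate_risk(risk_factors):
    """Convert baseline risk to odds, apply each factor's OR, convert back."""
    odds = BASELINE_RISK / (1 - BASELINE_RISK)
    for factor in risk_factors:
        odds *= ODDS_RATIOS[factor]
    return odds / (1 + odds)


# A female patient with a history of depression lands near the ~22% quoted above.
print(f"{approximate_risk(['female_sex', 'depression']):.0%}")   # ~21%
# With all four risk factors present, the estimate is close to the reported 49%.
print(f"{approximate_risk(ODDS_RATIOS):.0%}")                    # ~49%

Run as a plain script, the two printed values come out at about 21% and 49%, in line with the probabilities reported in the study.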
The researchers created a second prediction algorithm based on data from patients without documentation of a mental health sign, symptom, or disorder prior to their levetiracetam prescription. This model incorporated age, sex, recreational drug use, and levetiracetam daily dose; it performed comparably well and might be used to determine safety of prescription, according to Dr. Josephson and his colleagues.
The authors noted that the study was limited by an inability to evaluate medication adherence, seizure type, and seizure frequency. One advantage of the study’s design is that it may have circumvented expectation bias, because general practitioners were unlikely to anticipate psychiatric adverse events or to apply a lower threshold for diagnosing them.
The authors disclosed research fellowships and support from foundations and federal agencies.
SOURCE: Josephson CB et al. JAMA Neurol. 2019 Jan 28. doi: 10.1001/jamaneurol.2018.4561.
FROM JAMA NEUROLOGY
Key clinical point: Among patients with epilepsy, a simple model may help predict the risk of a psychiatric adverse effect from levetiracetam.
Major finding: The likelihood of a psychiatric adverse event increases from 8% for patients with no risk factors to 49% with all risk factors present.
Study details: A retrospective open cohort study of 1,173 patients with epilepsy receiving levetiracetam in the United Kingdom.
Disclosures: The authors disclosed research fellowships and support from foundations and federal agencies.
Source: Josephson CB et al. JAMA Neurol. 2019 Jan 28. doi: 10.1001/jamaneurol.2018.4561.
Age of migraine onset may affect stroke risk
The age at which a patient develops migraine with aura may be an important factor in assessing stroke risk, according to a prospective cohort study published in Headache.
Patients who had onset of migraine with visual aura after age 50 years had an increased risk of ischemic stroke, compared with patients with no headache, the researchers found. Patients with longer exposure to migraine with visual aura – that is, onset before age 50 years – did not have significantly increased ischemic stroke risk, said X. Michelle Androulakis, MD, of the department of neurology at the University of South Carolina in Columbia, and her colleagues.
“Migraine, especially migraine with aura, is associated with increased risk of ischemic stroke,” but whether age of migraine onset affects the risk of cardiovascular disease has been unclear, the researchers said.
To examine the risk of ischemic stroke in migraineurs with and without aura with onset before and after age 50 years, the investigators conducted a post hoc analysis of data from the ongoing Atherosclerosis Risk in Communities (ARIC) study. The researchers adjusted for potential confounders, including diabetes, body mass index, hypertension, and hyperlipidemia.
In ARIC, participants completed a questionnaire about their migraine history at their third study visit (1993-1995) and were followed for ischemic stroke incidence over 20 years.
Of the 11,592 ARIC participants included in the analysis (mean age, 61 years; 76.5% white; and 55.3% female), 447 had migraine with aura, and 1,128 had migraine without aura. Onset of migraine with aura at age 50 years or older (average duration, 4.75 years) was associated with more than twofold greater risk of ischemic stroke, compared with no headache (multivariable adjusted hazard ratio = 2.17). Onset of migraine with aura before age 50 years (average duration, 28.17 years) was not significantly associated with stroke. A logistic regression model yielded consistent results.
In addition, patients with migraine without aura did not have an increased risk of stroke, regardless of the age of onset. The absolute risk for stroke in migraine with aura was 8.27%, and the absolute risk in migraine without aura was 4.25%.
“We found unexpected results suggesting that the onset of migraine with aura before age 50 is not associated with ischemic stroke. ... These results are specific to first-time ischemic stroke incidents that occurred in mid- to late life; therefore, it cannot be generalized to stroke in younger patients,” the authors wrote.
It could be that migraine with aura symptoms that start at a later age are a red flag for paradoxical emboli from a patent foramen ovale or microemboli, Dr. Androulakis and her colleagues noted. It also is possible that the degree of cortical spreading depression required to induce migraine with aura symptoms is different later in life versus earlier in life.
“This study underscores the importance of MA [migraine with aura] symptoms onset in evaluation of ischemic stroke risk in late life,” the researchers concluded.
The authors had no relevant conflicts of interest. ARIC has been funded by the National Heart, Lung, and Blood Institute.
SOURCE: Androulakis XM et al. Headache. 2019 Jan 21. doi: 10.1111/head.13468.
FROM HEADACHE
Key clinical point: Age of migraine onset may be an important factor in assessing stroke risk.
Major finding: Onset of migraine with aura at age 50 years or older was associated with more than twofold greater risk of ischemic stroke, compared with no headache (multivariable adjusted hazard ratio = 2.17).
Study details: A post hoc analysis of data from more than 11,500 participants in the Atherosclerosis Risk in Communities (ARIC) study.
Disclosures: The authors had no relevant conflicts of interest. ARIC has been funded by the National Heart, Lung, and Blood Institute.
Source: Androulakis XM et al. Headache. 2019 Jan 21. doi: 10.1111/head.13468.
How seizure prediction may benefit patients with epilepsy
NEW ORLEANS – For people with epilepsy, “the sudden and apparently unpredictable nature of seizures is one of the most disabling aspects of having the disorder,” said Michael Privitera, MD.
If a patient knew that “tomorrow will be a dangerous day” with a 50% chance of having a seizure, the patient could avoid hazardous activities, try to reduce stress, or increase supervision to reduce the risk of sudden, unexpected death in epilepsy, said Dr. Privitera, professor of neurology and director of the epilepsy center at the University of Cincinnati Gardner Neuroscience Institute. Physicians might be able to intervene during high-risk periods by altering antiepileptic drug regimens.
Evidence suggests that seizure prediction is possible today and that advances in wearable devices and analysis of chronic EEG recordings likely will improve the ability to predict seizures, Dr. Privitera said at the annual meeting of the American Epilepsy Society. Studies have found that some patients can predict the likelihood of seizures in the next 24 hours better than chance. In the future, algorithms that incorporate variables such as pulse, stress, mood, electrodermal activity, circadian rhythms, and EEG may further refine seizure prediction.
A complex picture
One problem with predicting seizures is that “you can have substantial changes in the seizure tendency, but not have a seizure,” Dr. Privitera said. Stress, alcohol, and missed medications, for example, may affect the seizure threshold. “They may be additive, and it may be when those things all hit at once that a seizure happens.”
Many patients report prodromal or premonitory symptoms before a seizure. “Most of us as clinicians will say, ‘Well, maybe you have some inkling, but I don’t think you’re really able to predict it,’ ” Dr. Privitera said.
Sheryl R. Haut, MD, professor of neurology at the Albert Einstein College of Medicine, New York, and her colleagues prospectively looked at patient self-prediction in 2007 (Neurology. 2007 Jan 23;68[4]:262-6). The investigators followed 74 people with epilepsy who completed a daily diary in which they predicted the likelihood of a seizure occurring in the next 24 hours. Their analysis included approximately 15,000 diary days and 1,400 seizure days.
A subset of participants, about 20%, was significantly better than chance at predicting when a seizure would happen. If a patient in this subgroup said that a seizure was extremely likely, then a seizure occurred approximately 37% of the time. If a patient predicted that a seizure was extremely unlikely, there was about a 10% chance of having a seizure.
“This was a pretty substantial difference,” Dr. Privitera said. Combining patients’ predictions with their self-reported stress levels seemed to yield the most accurate predictions.
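The self-prediction figures above can be turned into a rough measure of how much better than chance the best self-predictors were. The short calculation below is illustrative arithmetic on the approximate numbers quoted above (about 1,400 seizure days across roughly 15,000 diary days), not a reanalysis of the study data.

# Rough arithmetic on the approximate figures quoted above; not study data.
seizure_days, diary_days = 1_400, 15_000
base_rate = seizure_days / diary_days        # ~9% of all diary days

p_likely = 0.37    # seizure rate on "extremely likely" days
p_unlikely = 0.10  # seizure rate on "extremely unlikely" days

risk_ratio = p_likely / p_unlikely
odds_ratio = (p_likely / (1 - p_likely)) / (p_unlikely / (1 - p_unlikely))

print(f"base rate of seizure days: {base_rate:.0%}")               # ~9%
print(f"risk ratio, likely vs. unlikely days: {risk_ratio:.1f}")   # ~3.7
print(f"odds ratio, likely vs. unlikely days: {odds_ratio:.1f}")   # ~5.3

In other words, for this subset an “extremely likely” day carried roughly four times the seizure risk of an “extremely unlikely” day.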
Stress and the SMILE study
About 90% of people with epilepsy identify at least one seizure precipitant, and the most commonly cited trigger is stress. When Dr. Privitera and his colleagues surveyed patients in their clinic, 82% identified stress as a trigger (Epilepsy Behav. 2014 Dec;41:74-7). More than half of these patients had used some form of stress reduction, such as exercise, yoga, or meditation; 88% of those patients thought that stress reduction helped their seizures.
Underlying anxiety was the only difference between patients who thought that their seizures were triggered by stress and those who did not. Patients who did not think that stress triggered their seizures had significantly lower scores on the Generalized Anxiety Disorder–7 scale.
Subsequently, Dr. Haut, Dr. Privitera, and colleagues conducted the Stress Management Intervention for Living with Epilepsy (SMILE) study, a prospective, controlled trial assessing the efficacy of a stress reduction intervention for reducing seizures, as well as measuring seizure self-prediction (Neurology. 2018 Mar 13;90[11]:e963-70). The researchers randomized patients to a progressive muscle relaxation intervention or to a control group; patients in the control group wrote down their activities for the day.
Patients posted diary entries twice daily into a smartphone, reporting stress levels and mood-related variables. As in Dr. Haut’s earlier study, patients predicted whether having a seizure was extremely unlikely, unlikely, neutral, likely, or extremely likely. Mood and stress variables (such as feeling unpleasant or pleasant, relaxed or stressed, and not worried or extremely worried) were ranked on a visual analog scale from 0 to 100.
The trial included participants who had at least two seizures per month and any seizure trigger. Medications were kept stable throughout the study. During a 2-month baseline, patients tracked their seizures and stress levels. During the 3-month treatment period, patients received the active or control intervention.
In all, 64 subjects completed the study and made all diary entries on 94% of days. In the active-treatment group, median seizure frequency decreased by 29%, compared with a 25% decrease in the control group. However, the difference between the groups was not statistically significant. Although the 25% reduction in the control group probably is partly attributable to the placebo effect, part of the decrease may be related to a mindfulness effect from completing the diary, Dr. Privitera said.
The active-treatment group had a statistically significant reduction in self-reported stress, compared with the control group, but this decrease did not correlate with seizure reduction. Changes in anxiety levels also did not correlate with seizures.
“It does not disprove the [stress] hypothesis, but it does tell us that there is more going on with stress and seizure triggers than just patients’ self-reported stress,” Dr. Privitera said.
Patients’ predictions
The seizure prediction findings in SMILE were similar to those of Dr. Haut’s earlier study. Among the 10 best predictors of the 64 participants, “when they said that a seizure was extremely likely, they were 8.36 times more likely to have a seizure than when they said a seizure was extremely unlikely,” Dr. Privitera said.
Many patients seemed to increase their predicted seizure probabilities in the days after having a seizure. In addition, feeling sad, nervous, worried, tense, or stressed significantly increased the likelihood that a patient would predict that a seizure was coming. However, these feelings were “not very accurate [for predicting] actual seizures,” he said. “Some people are better predictors, but really the basis of that prediction remains to be seen. One of my hypotheses is that some of these people may actually be responding to subclinical EEG changes.”
Together, these self-prediction studies include data from 4,500 seizures and 26,000 diary entries and show that “there is some information in patient self-report that can help us in understanding how to predict and when to predict seizures,” Dr. Privitera said.
Incorporating cardiac, EEG, and other variables
Various other factors may warrant inclusion in a seizure forecasting system. A new vagus nerve stimulation system responds to heart rate changes that occur at seizure onset. And for decades, researchers have studied the potential for EEG readings to predict seizures. A 2008 analysis of 47 reports concluded that limited progress had been made in predicting a seizure from interictal EEG (Epilepsy Behav. 2008 Jan;12[1]:128-35). Now, however, long-term intracranial recordings are providing new and important information about EEG patterns.
Whereas early studies examined EEG recordings from epilepsy monitoring units – when patients may have been sleep deprived, had medications removed, or recently undergone surgery – chronic intracranial recordings from devices such as the RNS (responsive neurostimulation) System have allowed researchers to look long term at EEG changes that are more representative of patients’ typical EEG patterns.
The RNS System detects interictal spikes and seizure discharges and then provides an electrical stimulation to stop seizures. “When you look at these recordings, there are a lot more electrographic seizures than clinical seizures that trigger these stimulations,” said Dr. Privitera. “If you look at somebody with a typical RNS, they may have 100 stimulations in a day and no clinical seizures. There are lots and lots of subclinical electrographic bursts – and not just spikes, but things that look like short electrographic seizures – that occur throughout the day.”
A handheld device
Researchers in Melbourne designed a system that uses implanted electrodes to provide chronic recordings (Lancet Neurol. 2013 Jun;12[6]:563-71). An algorithm then learned to predict the likelihood of a seizure from the patient’s data as the system recorded over time. The system could indicate when a seizure was likely by displaying a light on a handheld device. Patients were recorded for between 6 months and 3 years.
“There was a statistically significant ability to predict when seizures were happening,” Dr. Privitera said. “There is information in long-term intracranial recordings in many of these people that will help allow us to do a better prediction than what we are able to do right now, which is essentially not much.”
This research suggests that pooling data across patients may not be an effective seizure prediction strategy because different epilepsy types have different patterns. In addition, an individual’s patterns may differ from a group’s patterns. Complicating matters, individual patients may have multiple seizure types with different onset mechanisms.
“Another important lesson is that false positives in a deterministic sense may not represent false positives in a probabilistic sense,” Dr. Privitera said. “That is, when the seizure prediction program – whether it is the diary or the intracranial EEG or anything else – says the threshold changed, but you did not have a seizure, it does not mean that your prediction system was wrong. If the seizure tendency is going up … and your system says the seizure tendency went up, but all you are measuring is actual seizures, it looks like it is a false positive prediction of seizures. But in fact it is a true positive prediction of the seizure tendency changing but not necessarily reaching seizure threshold.”
Multiday patterns
Recent research shows that “we are just at the start,” Dr. Privitera said. “There are patterns underlying seizure frequency that … we are only beginning to be able to look at because of these chronic recordings.”
Baud et al. analyzed interictal epileptiform activity and seizures in patients who have had responsive neurostimulators for as long as 10 years (Nat Commun. 2018 Jan 8;9[1]:88). “What they found was that interictal spikes and rhythmic discharges oscillate with circadian and multiday periods that differ from person to person,” Dr. Privitera said. “There were multiday periodicities, most commonly in the 20- to 30-day duration, that were relatively stable over periods of time that lasted up to years.”
Researchers knew that seizures in women of childbearing age can cluster in association with the menstrual cycle, but similar cycles also were seen in men. In addition, the researchers found that seizures “occur preferentially during the rising phase of these multiday interictal rhythms,” which has implications for seizure forecasts, Dr. Privitera noted.
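One simple way to look for such multiday rhythms in chronic recordings, shown below as an illustrative sketch rather than the method used by Baud et al., is to compute a periodogram of the daily interictal discharge count and check for a dominant peak at a period of roughly 20-30 days. The daily counts here are synthetic, built around a hypothetical 26-day cycle purely to demonstrate the idea.

# Illustrative sketch only: a periodogram check for a multiday rhythm in a
# daily count of interictal discharges. The series is synthetic (hypothetical
# ~26-day cycle plus noise); this is not the analysis used by Baud et al.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(730)                        # two years of daily counts
cycle = 20 * np.sin(2 * np.pi * days / 26)   # hypothetical ~26-day rhythm
daily_spikes = np.clip(100 + cycle + rng.normal(0, 10, days.size), 0, None)

detrended = daily_spikes - daily_spikes.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)    # cycles per day

dominant = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
print(f"dominant period: {1 / freqs[dominant]:.1f} days")  # ~26 days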
Stress biomarkers and wearables
Future seizure prediction methods may incorporate other biomarkers, such as stress hormones. A researcher at the University of Cincinnati, Jason Heikenfeld, PhD, is conducting research with a sensor that sticks to the wrist and measures sweat content, Dr. Privitera said. The technology originally was developed to measure sodium and potassium in sweat, but Dr. Privitera’s group has been working with him to measure cortisol, which may be a biomarker for stress and be useful for seizure prediction.
“Multivariate models are needed. We have lots of different ways that we can look at seizure prediction, and most likely the most accurate seizure prediction programs will incorporate multiple different areas,” Dr. Privitera said. “Seizure forecasting is possible. We can do it now. We can probably do it better than chance in many patients. ... It is important because changes in seizure likelihood could lead to pharmacologic or device or behavioral interventions that may help prevent seizures.”
Dr. Privitera reported conducting contracted research for Greenwich and SK Life Science and receiving consulting fees from Upsher-Smith and Astellas.
SOURCE: Privitera M. AES 2018, Judith Hoyer Lecture in Epilepsy.
NEW ORLEANS – For people with epilepsy, “the sudden and apparently unpredictable nature of seizures is one of the most disabling aspects of having the disorder,” said Michael Privitera, MD.
If a patient knew that “tomorrow will be a dangerous day” with a 50% chance of having a seizure, the patient could avoid hazardous activities, try to reduce stress, or increase supervision to reduce the risk of sudden, unexpected death in epilepsy, said Dr. Privitera, professor of neurology and director of the epilepsy center at the University of Cincinnati Gardner Neuroscience Institute. Physicians might be able to intervene during high-risk periods by altering antiepileptic drug regimens.
Evidence suggests that seizure prediction is possible today and that advances in wearable devices and analysis of chronic EEG recordings likely will improve the ability to predict seizures, Dr. Privitera said at the annual meeting of the American Epilepsy Society. Studies have found that some patients can predict the likelihood of seizures in the next 24 hours better than chance. In the future, algorithms that incorporate variables such as pulse, stress, mood, electrodermal activity, circadian rhythms, and EEG may further refine seizure prediction.
A complex picture
One problem with predicting seizures is that “you can have substantial changes in the seizure tendency, but not have a seizure,” Dr. Privitera said. Stress, alcohol, and missed medications, for example, may affect the seizure threshold. “They may be additive, and it may be when those things all hit at once that a seizure happens.”
Many patients report prodromal or premonitory symptoms before a seizure. “Most of us as clinicians will say, ‘Well, maybe you have some inkling, but I don’t think you’re really able to predict it,’ ” Dr. Privitera said.
Sheryl R. Haut, MD, professor of neurology at the Albert Einstein College of Medicine, New York, and her colleagues prospectively looked at patient self-prediction in 2007 (Neurology. 2007 Jan 23;68[4]:262-6). The investigators followed 74 people with epilepsy who completed a daily diary in which they predicted the likelihood of a seizure occurring in the next 24 hours. Their analysis included approximately 15,000 diary days and 1,400 seizure days.
A subset of participants, about 20%, was significantly better than chance at predicting when a seizure would happen. If a patient in this subgroup said that a seizure was extremely likely, then a seizure occurred approximately 37% of the time. If a patient predicted that a seizure was extremely unlikely, there was about a 10% chance of having a seizure.
“This was a pretty substantial difference,” Dr. Privitera said. Combining patients’ predictions with their self-reported stress levels seemed to yield the most accurate predictions.
Stress and the SMILE study
About 90% of people with epilepsy identify at least one seizure precipitant, and the most commonly cited trigger is stress. When Dr. Privitera and his colleagues surveyed patients in their clinic, 82% identified stress as a trigger (Epilepsy Behav. 2014 Dec;41:74-7). More than half of these patients had used some form of stress reduction, such as exercise, yoga, or meditation; 88% of those patients thought that stress reduction helped their seizures.
Underlying anxiety was the only difference between patients who thought that their seizures were triggered by stress and those who did not. Patients who did not think that stress triggered their seizures had significantly lower scores on the Generalized Anxiety Disorders–7.
Subsequently, Dr. Haut, Dr. Privitera, and colleagues conducted the Stress Management Intervention for Living with Epilepsy (SMILE) study, a prospective, controlled trial assessing the efficacy of a stress reduction intervention for reducing seizures, as well as measuring seizure self-prediction (Neurology. 2018 Mar 13;90[11]:e963-70). The researchers randomized patients to a progressive muscle relaxation intervention or to a control group; patients in the control group wrote down their activities for the day.
Patients posted diary entries twice daily into a smartphone, reporting stress levels and mood-related variables. As in Dr. Haut’s earlier study, patients predicted whether having a seizure was extremely unlikely, unlikely, neutral, likely, or extremely likely. Mood and stress variables (such as feeling unpleasant or pleasant, relaxed or stressed, and not worried or extremely worried) were ranked on a visual analog scale from 0 to 100.
The trial included participants who had at least two seizures per month and any seizure trigger. Medications were kept stable throughout the study. During a 2-month baseline, patients tracked their seizures and stress levels. During the 3-month treatment period, patients received the active or control intervention.
In all, 64 subjects completed the study, completing all diary entries on 94% of the days. In the active-treatment group, median seizure frequency decreased by 29%, compared with a 25% decrease in the control group. However, the difference between the groups was not statistically significant. Although the 25% reduction in the control group probably is partly attributable to the placebo effect, part of the decrease may be related to a mindfulness effect from completing the diary, Dr. Privitera said.
The active-treatment group had a statistically significant reduction in self-reported stress, compared with the control group, but this decrease did not correlate with seizure reduction. Changes in anxiety levels also did not correlate with seizures.
“It does not disprove the [stress] hypothesis, but it does tell us that there is more going on with stress and seizure triggers than just patients’ self-reported stress,” Dr. Privitera said.
Patients’ predictions
The seizure prediction findings in SMILE were similar to those of Dr. Haut’s earlier study. Among the 10 highest predictors out of the 64 participants, “when they said that a seizure was extremely likely, they were 8.36 times more likely to have a seizure than when they said a seizure was extremely unlikely,” Dr. Privitera said.
Many patients seemed to increase their predicted seizure probabilities in the days after having a seizure. In addition, feeling sad, nervous, worried, tense, or stressed significantly increased the likelihood that a patient would predict that a seizure was coming. However, these feelings were “not very accurate [for predicting] actual seizures,” he said. “Some people are better predictors, but really the basis of that prediction remains to be seen. One of my hypotheses is that some of these people may actually be responding to subclinical EEG changes.”
Together, these self-prediction studies include data from 4,500 seizures and 26,000 diary entries and show that “there is some information in patient self-report that can help us in understanding how to predict and when to predict seizures,” Dr. Privitera said.
Incorporating cardiac, EEG, and other variables
Various other factors may warrant inclusion in a seizure forecasting system. A new vagus nerve stimulation system responds to heart rate changes that occur at seizure onset. And for decades, researchers have studied the potential for EEG readings to predict seizures. A 2008 analysis of 47 reports concluded that limited progress had been made in predicting a seizure from interictal EEG (Epilepsy Behav. 2008 Jan;12[1]:128-35). Now, however, long-term intracranial recordings are providing new and important information about EEG patterns.
Whereas early studies examined EEG recordings from epilepsy monitoring units – when patients may have been sleep deprived, had medications removed, or recently undergone surgery – chronic intracranial recordings from devices such as the RNS (responsive neurostimulation) System have allowed researchers to look long term at EEG changes that are more representative of patients’ typical EEG patterns.
The RNS System detects interictal spikes and seizure discharges and then provides an electrical stimulation to stop seizures. “When you look at these recordings, there are a lot more electrographic seizures than clinical seizures that trigger these stimulations,” said Dr. Privitera. “If you look at somebody with a typical RNS, they may have 100 stimulations in a day and no clinical seizures. There are lots and lots of subclinical electrographic bursts – and not just spikes, but things that look like short electrographic seizures – that occur throughout the day.”
A handheld device
Researchers in Melbourne designed a system that uses implanted electrodes to provide chronic recordings (Lancet Neurol. 2013 Jun;12[6]:563-71). An algorithm then learned to predict the likelihood of a seizure from the patient’s data as the system recorded over time. The system could indicate when a seizure was likely by displaying a light on a handheld device. Patients were recorded for between 6 months and 3 years.
“There was a statistically significant ability to predict when seizures were happening,” Dr. Privitera said. “There is information in long-term intracranial recordings in many of these people that will help allow us to do a better prediction than what we are able to do right now, which is essentially not much.”
This research suggests that pooling data across patients may not be an effective seizure prediction strategy because different epilepsy types have different patterns. In addition, an individual’s patterns may differ from a group’s patterns. Complicating matters, individual patients may have multiple seizure types with different onset mechanisms.
“Another important lesson is that false positives in a deterministic sense may not represent false positives in a probabilistic sense,” Dr. Privitera said. “That is, when the seizure prediction program – whether it is the diary or the intracranial EEG or anything else – says the threshold changed, but you did not have a seizure, it does not mean that your prediction system was wrong. If the seizure tendency is going up … and your system says the seizure tendency went up, but all you are measuring is actual seizures, it looks like it is a false positive prediction of seizures. But in fact it is a true positive prediction of the seizure tendency changing but not necessarily reaching seizure threshold.”
Multiday patterns
Recent research shows that “we are just at the start,” Dr. Privitera said. “There are patterns underlying seizure frequency that … we are only beginning to be able to look at because of these chronic recordings.”
Baud et al. analyzed interictal epileptiform activity and seizures in patients who have had responsive neurostimulators for as long as 10 years (Nat Commun. 2018 Jan 8;9[1]:88). “What they found was that interictal spikes and rhythmic discharges oscillate with circadian and multiday periods that differ from person to person,” Dr. Privitera said. “There were multiday periodicities, most commonly in the 20- to 30-day duration, that were relatively stable over periods of time that lasted up to years.”
Researchers knew that seizures in women of childbearing age can cluster in association with the menstrual cycle, but similar cycles also were seen in men. In addition, the researchers found that seizures “occur preferentially during the rising phase of these multiday interictal rhythms,” which has implications for seizure forecasts, Dr. Privitera noted.
Stress biomarkers and wearables
Future seizure prediction methods may incorporate other biomarkers, such as stress hormones. A researcher at the University of Cincinnati, Jason Heikenfeld, PhD, is conducting research with a sensor that sticks to the wrist and measures sweat content, Dr. Privitera said. The technology originally was developed to measure sodium and potassium in sweat, but Dr. Privitera’s group has been working with him to measure cortisol, which may be a biomarker for stress and be useful for seizure prediction.
“Multivariate models are needed. We have lots of different ways that we can look at seizure prediction, and most likely the most accurate seizure prediction programs will incorporate multiple different areas,” Dr. Privitera said. “Seizure forecasting is possible. We can do it now. We can probably do it better than chance in many patients. ... It is important because changes in seizure likelihood could lead to pharmacologic or device or behavioral interventions that may help prevent seizures.”
Dr. Privitera reported conducting contracted research for Greenwich and SK Life Science and receiving consulting fees from Upsher-Smith and Astellas.
SOURCE: Privitera M. AES 2018, Judith Hoyer Lecture in Epilepsy.
NEW ORLEANS – For people with epilepsy, “the sudden and apparently unpredictable nature of seizures is one of the most disabling aspects of having the disorder,” said Michael Privitera, MD.
If a patient knew that “tomorrow will be a dangerous day” with a 50% chance of having a seizure, the patient could avoid hazardous activities, try to reduce stress, or increase supervision to reduce the risk of sudden, unexpected death in epilepsy, said Dr. Privitera, professor of neurology and director of the epilepsy center at the University of Cincinnati Gardner Neuroscience Institute. Physicians might be able to intervene during high-risk periods by altering antiepileptic drug regimens.
Evidence suggests that seizure prediction is possible today and that advances in wearable devices and analysis of chronic EEG recordings likely will improve the ability to predict seizures, Dr. Privitera said at the annual meeting of the American Epilepsy Society. Studies have found that some patients can predict the likelihood of seizures in the next 24 hours better than chance. In the future, algorithms that incorporate variables such as pulse, stress, mood, electrodermal activity, circadian rhythms, and EEG may further refine seizure prediction.
A complex picture
One problem with predicting seizures is that “you can have substantial changes in the seizure tendency, but not have a seizure,” Dr. Privitera said. Stress, alcohol, and missed medications, for example, may affect the seizure threshold. “They may be additive, and it may be when those things all hit at once that a seizure happens.”
Many patients report prodromal or premonitory symptoms before a seizure. “Most of us as clinicians will say, ‘Well, maybe you have some inkling, but I don’t think you’re really able to predict it,’ ” Dr. Privitera said.
Sheryl R. Haut, MD, professor of neurology at the Albert Einstein College of Medicine, New York, and her colleagues prospectively looked at patient self-prediction in 2007 (Neurology. 2007 Jan 23;68[4]:262-6). The investigators followed 74 people with epilepsy who completed a daily diary in which they predicted the likelihood of a seizure occurring in the next 24 hours. Their analysis included approximately 15,000 diary days and 1,400 seizure days.
A subset of participants, about 20%, was significantly better than chance at predicting when a seizure would happen. If a patient in this subgroup said that a seizure was extremely likely, then a seizure occurred approximately 37% of the time. If a patient predicted that a seizure was extremely unlikely, there was about a 10% chance of having a seizure.
“This was a pretty substantial difference,” Dr. Privitera said. Combining patients’ predictions with their self-reported stress levels seemed to yield the most accurate predictions.
Stress and the SMILE study
About 90% of people with epilepsy identify at least one seizure precipitant, and the most commonly cited trigger is stress. When Dr. Privitera and his colleagues surveyed patients in their clinic, 82% identified stress as a trigger (Epilepsy Behav. 2014 Dec;41:74-7). More than half of these patients had used some form of stress reduction, such as exercise, yoga, or meditation; 88% of those patients thought that stress reduction helped their seizures.
Underlying anxiety was the only difference between patients who thought that their seizures were triggered by stress and those who did not. Patients who did not think that stress triggered their seizures had significantly lower scores on the Generalized Anxiety Disorders–7.
Subsequently, Dr. Haut, Dr. Privitera, and colleagues conducted the Stress Management Intervention for Living with Epilepsy (SMILE) study, a prospective, controlled trial assessing the efficacy of a stress reduction intervention for reducing seizures, as well as measuring seizure self-prediction (Neurology. 2018 Mar 13;90[11]:e963-70). The researchers randomized patients to a progressive muscle relaxation intervention or to a control group; patients in the control group wrote down their activities for the day.
Patients posted diary entries twice daily into a smartphone, reporting stress levels and mood-related variables. As in Dr. Haut’s earlier study, patients predicted whether having a seizure was extremely unlikely, unlikely, neutral, likely, or extremely likely. Mood and stress variables (such as feeling unpleasant or pleasant, relaxed or stressed, and not worried or extremely worried) were ranked on a visual analog scale from 0 to 100.
The trial included participants who had at least two seizures per month and any seizure trigger. Medications were kept stable throughout the study. During a 2-month baseline, patients tracked their seizures and stress levels. During the 3-month treatment period, patients received the active or control intervention.
In all, 64 subjects completed the study, completing all diary entries on 94% of the days. In the active-treatment group, median seizure frequency decreased by 29%, compared with a 25% decrease in the control group. However, the difference between the groups was not statistically significant. Although the 25% reduction in the control group probably is partly attributable to the placebo effect, part of the decrease may be related to a mindfulness effect from completing the diary, Dr. Privitera said.
The active-treatment group had a statistically significant reduction in self-reported stress, compared with the control group, but this decrease did not correlate with seizure reduction. Changes in anxiety levels also did not correlate with seizures.
“It does not disprove the [stress] hypothesis, but it does tell us that there is more going on with stress and seizure triggers than just patients’ self-reported stress,” Dr. Privitera said.
Patients’ predictions
The seizure prediction findings in SMILE were similar to those of Dr. Haut’s earlier study. Among the 10 highest predictors out of the 64 participants, “when they said that a seizure was extremely likely, they were 8.36 times more likely to have a seizure than when they said a seizure was extremely unlikely,” Dr. Privitera said.
Many patients seemed to increase their predicted seizure probabilities in the days after having a seizure. In addition, feeling sad, nervous, worried, tense, or stressed significantly increased the likelihood that a patient would predict that a seizure was coming. However, these feelings were “not very accurate [for predicting] actual seizures,” he said. “Some people are better predictors, but really the basis of that prediction remains to be seen. One of my hypotheses is that some of these people may actually be responding to subclinical EEG changes.”
Together, these self-prediction studies include data from 4,500 seizures and 26,000 diary entries and show that “there is some information in patient self-report that can help us in understanding how to predict and when to predict seizures,” Dr. Privitera said.
Incorporating cardiac, EEG, and other variables
Various other factors may warrant inclusion in a seizure forecasting system. A new vagus nerve stimulation system responds to heart rate changes that occur at seizure onset. And for decades, researchers have studied the potential for EEG readings to predict seizures. A 2008 analysis of 47 reports concluded that limited progress had been made in predicting a seizure from interictal EEG (Epilepsy Behav. 2008 Jan;12[1]:128-35). Now, however, long-term intracranial recordings are providing new and important information about EEG patterns.
Whereas early studies examined EEG recordings from epilepsy monitoring units – when patients may have been sleep deprived, had medications removed, or recently undergone surgery – chronic intracranial recordings from devices such as the RNS (responsive neurostimulation) System have allowed researchers to look long term at EEG changes that are more representative of patients’ typical EEG patterns.
The RNS System detects interictal spikes and seizure discharges and then provides an electrical stimulation to stop seizures. “When you look at these recordings, there are a lot more electrographic seizures than clinical seizures that trigger these stimulations,” said Dr. Privitera. “If you look at somebody with a typical RNS, they may have 100 stimulations in a day and no clinical seizures. There are lots and lots of subclinical electrographic bursts – and not just spikes, but things that look like short electrographic seizures – that occur throughout the day.”
A handheld device
Researchers in Melbourne designed a system that uses implanted electrodes to provide chronic recordings (Lancet Neurol. 2013 Jun;12[6]:563-71). An algorithm then learned to predict the likelihood of a seizure from the patient’s data as the system recorded over time. The system could indicate when a seizure was likely by displaying a light on a handheld device. Patients were recorded for between 6 months and 3 years.
“There was a statistically significant ability to predict when seizures were happening,” Dr. Privitera said. “There is information in long-term intracranial recordings in many of these people that will help allow us to do a better prediction than what we are able to do right now, which is essentially not much.”
This research suggests that pooling data across patients may not be an effective seizure prediction strategy because different epilepsy types have different patterns. In addition, an individual’s patterns may differ from a group’s patterns. Complicating matters, individual patients may have multiple seizure types with different onset mechanisms.
“Another important lesson is that false positives in a deterministic sense may not represent false positives in a probabilistic sense,” Dr. Privitera said. “That is, when the seizure prediction program – whether it is the diary or the intracranial EEG or anything else – says the threshold changed, but you did not have a seizure, it does not mean that your prediction system was wrong. If the seizure tendency is going up … and your system says the seizure tendency went up, but all you are measuring is actual seizures, it looks like it is a false positive prediction of seizures. But in fact it is a true positive prediction of the seizure tendency changing but not necessarily reaching seizure threshold.”
Multiday patterns
Recent research shows that “we are just at the start,” Dr. Privitera said. “There are patterns underlying seizure frequency that … we are only beginning to be able to look at because of these chronic recordings.”
Baud et al. analyzed interictal epileptiform activity and seizures in patients who have had responsive neurostimulators for as long as 10 years (Nat Commun. 2018 Jan 8;9[1]:88). “What they found was that interictal spikes and rhythmic discharges oscillate with circadian and multiday periods that differ from person to person,” Dr. Privitera said. “There were multiday periodicities, most commonly in the 20- to 30-day duration, that were relatively stable over periods of time that lasted up to years.”
Researchers knew that seizures in women of childbearing age can cluster in association with the menstrual cycle, but similar cycles also were seen in men. In addition, the researchers found that seizures “occur preferentially during the rising phase of these multiday interictal rhythms,” which has implications for seizure forecasts, Dr. Privitera noted.
Stress biomarkers and wearables
Future seizure prediction methods may incorporate other biomarkers, such as stress hormones. A researcher at the University of Cincinnati, Jason Heikenfeld, PhD, is conducting research with a sensor that sticks to the wrist and measures sweat content, Dr. Privitera said. The technology originally was developed to measure sodium and potassium in sweat, but Dr. Privitera’s group has been working with him to measure cortisol, which may be a biomarker for stress and be useful for seizure prediction.
“Multivariate models are needed. We have lots of different ways that we can look at seizure prediction, and most likely the most accurate seizure prediction programs will incorporate multiple different areas,” Dr. Privitera said. “Seizure forecasting is possible. We can do it now. We can probably do it better than chance in many patients. ... It is important because changes in seizure likelihood could lead to pharmacologic or device or behavioral interventions that may help prevent seizures.”
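As an illustration of the multivariate idea, the sketch below combines three hypothetical signal streams (an EEG-derived probability, the phase of a multiday rhythm, and a wrist-sensor cortisol reading) into a single logistic model. All data and effect sizes are simulated; no such combined model was reported here.

```python
# Sketch of a multivariate forecast combining several hypothetical signal streams:
# an EEG-derived probability, the phase of a multiday rhythm, and a standardized
# wrist-sensor cortisol reading. All data and effect sizes are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
eeg_prob   = rng.uniform(0, 1, n)     # output of an EEG-based forecaster
cycle_rise = rng.uniform(-1, 1, n)    # > 0 on the rising phase of a multiday rhythm
cortisol   = rng.normal(0, 1, n)      # standardized sweat-cortisol level

# Hypothetical ground truth in which risk grows with all three inputs.
logit = -3 + 3.0 * eeg_prob + 1.0 * cycle_rise + 0.5 * cortisol
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([eeg_prob, cycle_rise, cortisol])
combined = LogisticRegression().fit(X, y)
print("learned weights (EEG, cycle phase, cortisol):", np.round(combined.coef_[0], 2))
```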
Dr. Privitera reported conducting contracted research for Greenwich and SK Life Science and receiving consulting fees from Upsher-Smith and Astellas.
SOURCE: Privitera M. AES 2018, Judith Hoyer Lecture in Epilepsy.
REPORTING FROM AES 2018
FDA approves generic version of vigabatrin
The drug is approved for the adjunctive treatment of focal seizures in patients aged 10 years and older who have not had an adequate response to other therapies.
The approval was granted to Teva Pharmaceuticals.
An FDA announcement noted that the agency has prioritized the approval of generic versions of drugs to improve access to treatments and to lower drug costs. Vigabatrin had been included on an FDA list of off-patent, off-exclusivity branded drugs without approved generics. The approval of generic vigabatrin “demonstrates that there is an open pathway to approving products like this one,” said FDA Commissioner Scott Gottlieb, MD.
The label for vigabatrin tablets includes a boxed warning for permanent vision loss. The generic vigabatrin tablets are part of a single shared-system Risk Evaluation and Mitigation Strategy (REMS) program with other drug products containing vigabatrin.
The most common side effects associated with vigabatrin tablets include dizziness, fatigue, sleepiness, involuntary eye movement, tremor, blurred vision, memory impairment, weight gain, joint pain, upper respiratory tract infection, aggression, double vision, abnormal coordination, and a confused state. Serious side effects associated with vigabatrin tablets include permanent vision loss and risk of suicidal thoughts or actions.
Tic disorders are associated with obesity and diabetes
The movement disorders are associated with cardiometabolic problems “even after taking into account a number of covariates and shared familial confounders and excluding relevant psychiatric comorbidities,” the researchers wrote. “The results highlight the importance of carefully monitoring cardiometabolic health in patients with Tourette syndrome or chronic tic disorder across the lifespan, particularly in those with comorbid attention-deficit/hyperactivity disorder (ADHD).”
Gustaf Brander, a researcher in the department of clinical neuroscience at Karolinska Institutet in Stockholm, and his colleagues conducted a longitudinal, population-based cohort study of individuals living in Sweden between Jan. 1, 1973, and Dec. 31, 2013. The researchers assessed outcomes for patients with previously validated diagnoses of Tourette syndrome or chronic tic disorder in the Swedish National Patient Register. Main outcomes included obesity, dyslipidemia, hypertension, type 2 diabetes mellitus (T2DM), and cardiovascular diseases, including ischemic heart diseases, arrhythmia, cerebrovascular diseases, transient ischemic attack, and arteriosclerosis. In addition, the researchers identified families with full siblings discordant for Tourette syndrome or chronic tic disorder.
Of the more than 14 million individuals in the cohort, 7,804 (76.4% male; median age at first diagnosis, 13.3 years) had a diagnosis of Tourette syndrome or chronic tic disorder in specialist care. Furthermore, the cohort included 5,141 families with full siblings who were discordant for these disorders.
Individuals with Tourette syndrome or chronic tic disorder had a higher risk for any metabolic or cardiovascular disorder, compared with the general population (hazard ratio adjusted by sex and birth year [aHR], 1.99) and sibling controls (aHR, 1.37). Specifically, individuals with Tourette syndrome or chronic tic disorder had higher risks for obesity (aHR, 2.76), T2DM (aHR, 1.67), and circulatory system diseases (aHR, 1.76).
The increased risk of any cardiometabolic disorder was significantly greater for males than it was for females (aHRs, 2.13 vs. 1.79), as was the risk of obesity (aHRs, 3.24 vs. 1.97).
The increased risk for cardiometabolic disorders in this patient population was evident by age 8 years. Exclusion of those patients with comorbid ADHD reduced but did not eliminate the risk (aHR, 1.52). The exclusion of other comorbidities did not significantly affect the results. Among patients with Tourette syndrome or chronic tic disorder, those who had received antipsychotic treatment for more than 1 year were significantly less likely to have metabolic and cardiovascular disorders, compared with patients not taking antipsychotic medication. This association may be related to “greater medical vigilance” and “should not be taken as evidence that antipsychotics are free from cardiometabolic adverse effects,” the authors noted.
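The adjusted hazard ratios reported above come from time-to-event models adjusted for sex and birth year. For readers unfamiliar with that machinery, the sketch below fits a Cox proportional hazards model to simulated data using the lifelines package; it is not the registry analysis itself, and the data, package choice, and effect sizes are assumptions.

```python
# Sketch of the kind of model behind an adjusted hazard ratio (aHR): a Cox
# proportional hazards fit adjusted for sex and birth year, using the lifelines
# package. The data frame is simulated; this is not the Swedish registry analysis.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 10000
df = pd.DataFrame({
    "tic_disorder": rng.binomial(1, 0.01, n),    # exposure of interest
    "male": rng.binomial(1, 0.5, n),             # covariate
    "birth_year": rng.integers(1973, 2013, n),   # covariate
})

# Simulated follow-up time and event indicator for a cardiometabolic outcome;
# exposed individuals are given systematically shorter times to event.
df["time"] = 20 * rng.exponential(1, n) / np.exp(0.7 * df["tic_disorder"])
df["event"] = rng.binomial(1, 0.3, n)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary["exp(coef)"])  # exp(coef) for tic_disorder approximates an adjusted HR
```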
The study was supported by a research grant from Tourettes Action. In addition, authors reported support from the Swedish Research Council and a Karolinska Institutet PhD stipend. Two authors disclosed personal fees from publishers, and one author disclosed grants and other funding from Shire.
SOURCE: Brander G et al. JAMA Neurol. 2019 Jan 14. doi: 10.1001/jamaneurol.2018.4279.
FROM JAMA NEUROLOGY
Key clinical point: Monitor cardiometabolic health in patients with Tourette syndrome or chronic tic disorder.
Major finding: Patients with Tourette syndrome or chronic tic disorder have a higher risk of metabolic or cardiovascular disorders, compared with the general population (adjusted hazard ratio, 1.99) and sibling controls (adjusted hazard ratio, 1.37).
Study details: A Swedish longitudinal, population-based cohort study of 7,804 individuals with Tourette syndrome or chronic tic disorder.
Disclosures: The study was supported by a research grant from Tourettes Action. Authors reported support from the Swedish Research Council and a Karolinska Institutet PhD stipend. Two authors disclosed personal fees from publishers, and one author disclosed grants and other funding from Shire.
Source: Brander G et al. JAMA Neurol. 2019 Jan 14. doi: 10.1001/jamaneurol.2018.4279.
Does rituximab delay disability progression in patients with secondary progressive MS?
Patients with secondary progressive MS who are treated with rituximab may accrue less disability, according to a retrospective analysis published online Jan. 7 in JAMA Neurology.
The results suggest that “B-cell depletion by rituximab may be therapeutically beneficial in these patients,” said study author Yvonne Naegelin, MD, of the department of neurology at University Hospital Basel, Switzerland, and her colleagues. “A prospective randomized clinical trial with a better level of evidence is needed to confirm the efficacy of rituximab in such patients.”
Research indicates that B cells play a role in the pathogenesis of relapsing-remitting and secondary progressive MS, and rituximab, an anti-CD20 monoclonal antibody, may deplete B cells in the peripheral immune system and the CNS. “Owing to the limited treatment options for secondary progressive MS and the extrapolation of results in relapsing-remitting MS and primary progressive MS, rituximab was used off-label for the treatment of secondary progressive MS,” the authors said. They compared disability progression in patients with secondary progressive MS who were treated with rituximab at MS centers in Switzerland with that of control patients who did not receive rituximab. The control patients were part of an observational cohort study at MS centers in Switzerland and the Netherlands. Data for the present analysis were collected between 2004 and 2017.
The investigators matched rituximab-treated and control patients 1:1 using propensity scores. Matching variables were sex, age, Expanded Disability Status Scale (EDSS) score, and disease duration at baseline. Rituximab-treated patients had a mean age of 49.7 years, mean disease duration of 18.2 years, and mean EDSS score of 5.9; 59% were women. Controls had a mean age of 51.3 years, mean disease duration of 19.4 years, and mean EDSS score of 5.7; 61% were women.
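For readers unfamiliar with propensity-score matching, the sketch below illustrates the general approach described above (a logistic model of treatment assignment followed by nearest-neighbor matching on the resulting score) using simulated data. It is not the authors’ code, and for simplicity it allows controls to be reused.

```python
# Sketch of 1:1 propensity-score matching on sex, age, EDSS score, and disease
# duration, as described above. The data are simulated, this is not the authors'
# code, and for simplicity controls may be reused (matching with replacement).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "treated": rng.binomial(1, 0.25, n),              # 1 = received rituximab
    "female": rng.binomial(1, 0.6, n),
    "age": rng.normal(50, 8, n),
    "edss": np.clip(rng.normal(5.8, 1.0, n), 0, 10),
    "duration": rng.normal(19, 6, n),
})

covariates = ["female", "age", "edss", "duration"]
ps_model = LogisticRegression().fit(df[covariates], df["treated"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["treated"] == 1]
controls = df[df["treated"] == 0]

# For each treated patient, find the control with the nearest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = controls.iloc[idx.ravel()]
print(len(treated), "treated patients matched to", len(matched_controls), "controls")
```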
A covariate-adjusted analysis of the matched set found that rituximab-treated patients had a significantly lower EDSS score during a mean follow-up of 3.5 years (mean difference, –0.52). In addition, time to confirmed disability progression was delayed in the rituximab-treated group (hazard ratio, 0.49). “Approximately 75% of untreated and 50% of treated individuals in our cohorts developed clinically significant confirmed progression for the 10-year period,” Dr. Naegelin and her colleagues reported. Complications, mainly related to infections, occurred in five cases during treatment. The researchers did not identify major safety concerns, however.
Dr. Naegelin had no conflict of interest disclosures. Several coauthors disclosed research support and compensation from pharmaceutical companies.
SOURCE: Naegelin Y et al. JAMA Neurol. 2019 Jan 7. doi: 10.1001/jamaneurol.2018.4239.
FROM JAMA NEUROLOGY
Key clinical point: Among patients with secondary progressive MS, those treated with rituximab may accrue less disability.
Major finding: Rituximab-treated patients, compared with controls, had a significantly lower EDSS score during a mean follow-up of 3.5 years (mean difference, –0.52).
Study details: A retrospective study of 88 propensity score–matched patients with secondary progressive MS.
Disclosures: Dr. Naegelin had no disclosures. Several coauthors disclosed research support and compensation from pharmaceutical companies.
Source: Naegelin Y et al. JAMA Neurol. 2019 Jan 7. doi: 10.1001/jamaneurol.2018.4239.