Can Posttraumatic Headache Characteristics Inform Prognosis and Treatment?
OJAI, CA—Soldiers with posttraumatic headaches are “complicated patients,” said Alan G. Finkel, MD, Director of the Carolina Headache Institute in Chapel Hill, North Carolina. No drugs are approved for the treatment of posttraumatic complications, and persistent posttraumatic headaches may interfere with return to military service.
Characteristics of posttraumatic headaches—such as whether they are continuous, nummular, or holocephalic—may provide prognostic clues and suggest possible therapies, Dr. Finkel said at the 10th Annual Winter Conference of the Headache Cooperative of the Pacific. In addition, neurologists can address sleep, mood, and concussion symptoms when managing patients with posttraumatic headache.
Occupational Outcomes
Posttraumatic headaches most commonly are classified as migraine. Other classifications include tension-type headache and trigeminal autonomic cephalalgia. A patient may report multiple types of headache. Dr. Finkel and his research colleagues hypothesized that among patients with posttraumatic headache, the headache diagnosis may not be sufficient to predict occupational outcomes and that other headache characteristics might be more important.
To assess associations between headache characteristics and the outcome of leaving military service for medical reasons, Dr. Finkel and colleagues analyzed data from a retrospective cohort study. The cohort included 95 patients who were referred for headache evaluation at the Brain Injury Center at Womack Army Medical Center, Fort Bragg, North Carolina, between August 2008 and December 2009. The study was published online ahead of print February 27 in Headache.
About 14% of the patients had a history of headache, and about 40% had a prior history of concussion. The most common injury cited was blast injury (53.7%).
Patients could report as many as three headaches (ie, one continuous and two noncontinuous). The 95 patients reported 166 headaches. About 75% of the patients reported a continuous headache, and approximately 72% reported a headache of a migraine type. The most clinically important headache was migraine for 61% of patients, tension-type headache for 4%, and trigeminal autonomic cephalalgia, including hemicrania continua, for 24%.
“The presence of a continuous headache was very likely to predict leaving service, and the headache diagnosis or the presence of a migraine diagnosis did not,” Dr. Finkel said.
Patients with continuous headache were approximately four times more likely to leave military service, compared with patients without continuous headache. A prior history of regular headache also appeared to predict the probability of discharge. Among patients with such a history, continuous holocephalic headache, as well as the tendency to medicate and stay active during the most clinically important headache (as opposed to lying down or continuing activities without medication), further increased the likelihood of leaving service.
The study’s limitations included its retrospective design, the possibility of recall bias, and the lack of controls, Dr. Finkel noted.
Assessment Tools
When evaluating patients, instruments such as the Neurobehavioral Symptom Inventory and concussion checklists can be useful. “Get some tested baselines that you can then compare longitudinally,” he said.
The Balance Error Scoring System and the King–Devick test can assess concussion symptoms. “While you are making an assessment for persistent posttraumatic headache, make some comments in your chart about … whether or not they have concussive symptoms,” Dr. Finkel said. Neurologists also can assess problems with emotions and mood, which may be treatable. A combination of dextromethorphan hydrobromide and quinidine sulfate is approved for the treatment of emotional incontinence, which is associated with traumatic brain injury. Dr. Finkel uses the Pain Catastrophizing Scale and Posttraumatic Stress Disorder (PTSD) Checklist to evaluate pain-related anxiety. Neurologists also can ask patients about sleep, which may play an important role in patients’ recovery.
Treatment Options
In a clinic-based sample of 100 soldiers with chronic posttraumatic headache after mild head trauma, topiramate appeared to be an effective prophylactic.
Investigators plan to conduct a placebo-controlled trial of prazosin in patients with chronic postconcussive headache. Prazosin, an alpha-1 adrenergic antagonist, may be prescribed to improve sleep and reduce nightmares. It may be a treatment option if a patient with chronic headache is hypervigilant and has insomnia, said Dr. Finkel. When prescribing prazosin, it is important to tell patients about the risk of fainting on the first night after taking the drug.
Defense Recommendation
The Department of Defense in February 2016 published a clinical recommendation for the primary care management of headache following concussion or mild traumatic brain injury. The recommendation describes red flags, establishes four categories into which symptoms might fall (ie, migraine, tension-type, cervicogenic, and neuropathic), and provides treatment guidance for each headache category.
If therapy alleviates holocephalic headaches, but focal pain persists, neurologists can try injecting onabotulinum toxin to treat the focal pain, Dr. Finkel said. In a case series of 64 patients with concussion-related headaches who were treated with onabotulinum toxin, 64% reported feeling better. The presence of PTSD did not appear to affect treatment outcomes, Dr. Finkel said.
Exercise and Expectation
Cardinal symptoms of concussion, including headache and PTSD, can improve with exercise, Dr. Finkel said. Evaluating patients on a treadmill can determine whether postconcussive symptoms recur at elevated heart rates. Patients can progressively increase the intensity of exercise until they are ready to resume activity.
When posttraumatic headache persists, neurologists should consider patients’ expectations. Research suggests that the language used to convey a diagnosis (eg, mild head injury, mild traumatic brain injury, or concussion) can affect what symptoms people anticipate. And patients’ perceptions of the illness may play a role in the persistence of postconcussion symptoms. Telling patients that they have a traumatic brain injury or expressing uncertainty about the diagnosis or prognosis is doing them a disservice, he said. “Tell them they are going to get better,” Dr. Finkel said.
—Jake Remaly
Suggested Reading
Erickson JC. Treatment outcomes of chronic post-traumatic headaches after mild head trauma in US soldiers: an observational study. Headache. 2011;51(6):932-944.
Finkel AG, Ivins BJ, Yerry JA, et al. Which matters more? A retrospective cohort study of headache characteristics and diagnosis type in soldiers with mTBI/concussion. Headache. 2017 Feb 27 [Epub ahead of print].
Finkel AG, Yerry JA, Klaric JS, et al. Headache in military service members with a history of mild traumatic brain injury: A cohort study of diagnosis and classification. Cephalalgia. 2016 May 20 [Epub ahead of print].
Whittaker R, Kemp S, House A. Illness perceptions and outcome in mild head injury: a longitudinal study. J Neurol Neurosurg Psychiatry. 2007;78(6):644-646.
Yerry JA, Kuehn D, Finkel AG. Onabotulinum toxin A for the treatment of headache in service members with a history of mild traumatic brain injury: a cohort study. Headache. 2015;55(3):395-406.
What Is the Optimal Number of Antiplatelet Agents for Preventing Recurrent Stroke?
HOUSTON—Although dual antiplatelet therapy provides a greater reduction in the risk of recurrent stroke than does antiplatelet monotherapy, adding a third agent may not increase the benefit, according to research presented at the International Stroke Conference 2017. Administering three antiplatelet agents does, however, increase the risk of bleeding, compared with dual antiplatelet therapy.
In 2012 and 2013, literature reviews suggested that the risk of recurrent stroke was reduced among patients who received two antiplatelet agents, compared with patients who received a single antiplatelet agent. These analyses prompted the question of whether triple antiplatelet therapy would provide a further risk reduction.
To examine this question, Philip Bath, MD, Chair of the Division of Clinical Neuroscience at the University of Nottingham, United Kingdom, and colleagues designed the Triple Antiplatelets for Reducing Dependency After Ischemic Stroke (TARDIS) trial. The researchers randomized patients with acute ischemic stroke or transient ischemic attack (TIA) to intensive therapy (ie, triple antiplatelet therapy) or standard of care (ie, dual antiplatelet therapy) for one month. After 30 days, all patients received standard of care. The study’s primary outcome was recurrent stroke or TIA at 90 days. Eligible patients entered the study within 48 hours of their index event, and patients with any level of stroke severity were accepted.
The investigators administered aspirin, clopidogrel, and dipyridamole to participants at standard doses. At the beginning of the trial, UK guidelines recommended the combination of aspirin and dipyridamole as first-line treatment for prevention of recurrent stroke. In 2010, the National Institute for Health and Care Excellence changed the guideline to recommend clopidogrel alone as first-line treatment. Dr. Bath and colleagues changed the treatment given to their standard-of-care group accordingly.
The researchers intended to enroll 4,100 participants, but ended the trial early on the recommendation of the data-monitoring committee. In all, 3,096 patients were enrolled. Their average age was 69. Approximately 63% of the population was male. The average time of stroke or TIA onset was 29 hours before enrollment. About one-third of patients had been taking antiplatelet therapy at baseline.
Dr. Bath and colleagues found no difference in the rate of recurrent stroke or TIA between treatment groups. Subgroup analysis found that people with an NIH Stroke Scale (NIHSS) score of 3 or lower at baseline tended to benefit from intensive therapy. Standard of care, however, tended to be superior for patients with an NIHSS score greater than 3. In addition, intensive therapy tended to be superior to the combination of aspirin and dipyridamole, but inferior to clopidogrel. Intensive therapy also was associated with significantly more bleeding and more severe bleeding.
—Erik Greb
Suggested Reading
Geeganage CM, Diener HC, Algra A, et al. Dual or mono antiplatelet therapy for patients with acute ischemic stroke or transient ischemic attack: systematic review and meta-analysis of randomized controlled trials. Stroke. 2012;43(4):1058-1066.
TARDIS Trial Investigators, Krishnan K, Beridze M, et al. Safety and efficacy of intensive vs. guideline antiplatelet therapy in high-risk patients with recent ischemic stroke or transient ischemic attack: rationale and design of the Triple Antiplatelets for Reducing Dependency after Ischaemic Stroke (TARDIS) trial (ISRCTN47823388). Int J Stroke. 2015;10(7):1159-1165.
Wong KS, Wang Y, Leng X, et al. Early dual versus mono antiplatelet therapy for acute non-cardioembolic ischemic stroke or transient ischemic attack: an updated systematic review and meta-analysis. Circulation. 2013;128(15):1656-1666.
HOUSTON—Although dual antiplatelet therapy provides a greater reduction in the risk of recurrent stroke than does antiplatelet monotherapy, adding a third agent may not increase the benefit, according to research presented at the International Stroke Conference 2017. Administering three antiplatelet agents does, however, increase the risk of bleeding, compared with dual antiplatelet therapy.
In 2012 and 2013, literature reviews suggested that the risk of recurrent stroke was reduced among patients who received two antiplatelet agents, compared with patients who received a single antiplatelet agent. These analyses prompted the question of whether triple antiplatelet therapy would provide a further risk reduction.
To examine this question, Philip Bath, MD, Chair of the Division of Clinical Neuroscience at the University of Nottingham, United Kingdom, and colleagues designed the Triple Antiplatelets for Reducing Dependency After Ischemic Stroke (TARDIS) trial. The researchers randomized patients with acute ischemic stroke or transient ischemic attack (TIA) to intensive therapy (ie, triple antiplatelet therapy) or standard of care (ie, dual antiplatelet therapy) for one month. After 30 days, all patients received standard of care. The study’s primary outcome was recurrent stroke or TIA at 90 days. Eligible patients entered the study within 48 hours of their index event, and patients with any level of stroke severity were accepted.
The investigators administered aspirin, clopidogrel, and dipyridamole to participants at standard doses. At the beginning of the trial, UK guidelines recommended the combination of aspirin and dipyridamole as first-line treatment for prevention of recurrent stroke. In 2010, the National Institute for Health and Care Excellence changed the guideline to recommend clopidogrel alone as first-line treatment. Dr. Bath and colleagues changed the treatment given to their standard-of-care group accordingly.
The researchers intended to enroll 4,100 participants, but ended the trial early on the recommendation of the data-monitoring committee. In all, 3,096 patients were enrolled. Their average age was 69, and approximately 63% were male. Patients enrolled an average of 29 hours after stroke or TIA onset. About one-third of patients had been taking antiplatelet therapy at baseline.
Dr. Bath and colleagues found no difference in the rate of recurrent stroke or TIA between treatment groups. Subgroup analysis found that people with an NIH Stroke Scale (NIHSS) score of 3 or lower at baseline tended to benefit from intensive therapy. Standard of care, however, tended to be superior for patients with an NIHSS score greater than 3. In addition, intensive therapy tended to be superior to the combination of aspirin and dipyridamole, but inferior to clopidogrel. Intensive therapy also was associated with significantly more bleeding and more severe bleeding.
—Erik Greb
Suggested Reading
Geeganage CM, Diener HC, Algra A, et al. Dual or mono antiplatelet therapy for patients with acute ischemic stroke or transient ischemic attack: systematic review and meta-analysis of randomized controlled trials. Stroke. 2012;43(4):1058-1066.
TARDIS Trial Investigators, Krishnan K, Beridze M, et al. Safety and efficacy of intensive vs. guideline antiplatelet therapy in high-risk patients with recent ischemic stroke or transient ischemic attack: rationale and design of the Triple Antiplatelets for Reducing Dependency after Ischaemic Stroke (TARDIS) trial (ISRCTN47823388). Int J Stroke. 2015;10(7):1159-1165.
Wong KS, Wang Y, Leng X, et al. Early dual versus mono antiplatelet therapy for acute non-cardioembolic ischemic stroke or transient ischemic attack: an updated systematic review and meta-analysis. Circulation. 2013;128(15):1656-1666.
Vaccine for Respiratory Syncytial Virus Enters Phase 1 Testing
A vaccine against respiratory syncytial virus (RSV) is entering a phase 1 safety and tolerability trial. The vaccine, developed by scientists at the National Institute of Allergy and Infectious Diseases (NIAID), is badly needed, according to Anthony Fauci, MD, director of NIAID. Although RSV infection is common and usually causes mild symptoms, it also can lead to severe lower respiratory tract diseases, such as pneumonia and bronchiolitis, in infants, children, the elderly, and immunocompromised patients. Globally, RSV infections cause upwards of 250,000 deaths each year. “RSV is underappreciated as a major cause of illness and death,” Fauci said.
The vaccine (DS-Cav1) will fill a void. Currently no vaccine is available to prevent RSV infection, and no drug is available to treat it. The monoclonal antibody palivizumab is approved for preventing lower respiratory tract disease caused by RSV in high-risk children, but is not approved for use in the general population.
The study, VRC 317, will enroll healthy adults aged 18 to 50. Participants will be randomly assigned to receive 2 injections, 12 weeks apart, of either the investigational vaccine alone or the vaccine with alum, a compound commonly added to vaccines to enhance the immune response.
Participants also will be randomly assigned to receive 1 of 3 doses (50, 150, or 500 µg) at both time points. To start, 5 people will receive the 50-µg dose. If they experience no serious adverse reactions attributable to the vaccine, the other participants will be vaccinated with the higher doses.
The participants will return for 12 clinic visits over 44 weeks, during which researchers will conduct physical exams, collect blood samples, and test mucus samples to measure the immune response.
DS-Cav1 is the result of “years of research” at the Vaccine Research Center, the NIH says. Traditionally a vaccine is derived from a weakened or inactivated whole virus. By contrast, DS-Cav1 is a single, structurally engineered protein from the surface of RSV. Co-lead investigator Barney Graham, MD, PhD, deputy VRC director, says, “This work represents how new biological insights from basic research can lead to candidate vaccines for diseases of public health importance.”
WES misses genes associated with leukemia, other diseases
Whole-exome sequencing (WES) may routinely miss genetic variations associated with leukemia and other diseases, according to research published in Scientific Reports.
The study revealed 832 genes that have low coverage across multiple WES platforms.
These genes are associated with leukemia, psoriasis, heart failure, and other diseases, and they may be missed by researchers using WES to study these diseases.
“Although it was known that coverage—the average number of times a given piece of DNA is read during sequencing—could be uneven in whole-exome sequencing, our new methods are the first to really quantify this,” said study author Santhosh Girirajan, MBBS, PhD, of The Pennsylvania State University, University Park.
“Adequate coverage—often as many as 70 or more reads for each piece of DNA—increases our confidence that the sequence is accurate, and, without it, it is nearly impossible to make confident predictions about the relationship between a mutation in a gene and a disease.”
“In our study, we found 832 genes that have systematically low coverage across 3 different sequencing platforms, meaning that these genes would be missed in disease studies.”
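The coverage metric described in these quotes can be illustrated with a toy computation. This is a hypothetical sketch with made-up read intervals, not the study's pipeline; real exome analyses compute depth with dedicated tools such as samtools or mosdepth.

```python
# Toy per-base coverage computation. "Coverage" at a position is the
# number of sequencing reads overlapping it; intervals below are invented.

def per_base_coverage(reads, region_len):
    """reads: list of (start, end) aligned intervals, 0-based half-open."""
    depth = [0] * region_len
    for start, end in reads:
        for pos in range(max(start, 0), min(end, region_len)):
            depth[pos] += 1
    return depth

reads = [(0, 5), (2, 8), (4, 10)]   # three hypothetical aligned reads
cov = per_base_coverage(reads, 10)
print(cov)                          # [1, 1, 2, 2, 3, 2, 2, 2, 1, 1]
mean_cov = sum(cov) / len(cov)      # average coverage across the region
# A gene whose region stays far below the ~70 reads mentioned above
# would be flagged as low-coverage and risk being missed in analysis.
```

In practice the 832 flagged genes are those whose average depth stays systematically low across platforms, which is why variants within them go undetected.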
The researchers said low-coverage regions may result from limited precision in WES technologies due to certain genomic features.
Highly repetitive stretches of DNA can prevent the sequencer from reading the DNA properly. The study showed that at least 60% of low-coverage genes occur near DNA repeats.
“One solution to this problem is for researchers to use whole-genome sequencing, which examines all base pairs of DNA instead of just the regions that contain genes,” Dr Girirajan said. “Our study found that whole-genome data had significantly fewer low-coverage genes than whole-exome data, and its coverage is more uniformly distributed across all parts of the genome.”
“However, the cost of whole-exome sequencing is still significantly lower than that of whole-genome sequencing. Until the cost of whole-genome sequencing is no longer a barrier, human genetics researchers should be aware of these limitations in whole-exome sequencing technologies.”
Study reveals ‘substantial’ malaria burden in US
Malaria imposes a substantial disease burden in the US, according to researchers.
Their study indicates that malaria hospitalizations and deaths in the US are more common than generally appreciated, as a steady stream of travelers return home with the disease.
In fact, malaria hospitalizations and deaths exceeded those from many other travel-related illnesses and generated about half a billion dollars in healthcare costs over a 15-year period.
These findings were published in the American Journal of Tropical Medicine and Hygiene.
“It appears more and more Americans are traveling to areas where malaria is common, and many of them are not taking preventive measures, such as using antimalarial preventive medications and mosquito repellents, even though they are very effective at preventing infections,” said study author Diana Khuu, PhD, of the University of California, Los Angeles.
For this study, Dr Khuu and her colleagues looked for malaria patients in a database maintained by the federal Agency for Healthcare Research and Quality that tracks hospital admissions nationwide.
The researchers found that, between 2000 and 2014, 22,029 people were admitted to US hospitals due to complications from malaria, 4823 patients were diagnosed with severe malaria, and 182 of these patients died.
Most of the deaths and severe disease were linked to infections with the Plasmodium falciparum parasite. However, in almost half of the malaria-related hospitalizations, there was no indication of parasite type.
The majority of malaria hospitalizations occurred in the eastern US in states along the Atlantic seaboard, and men accounted for 60% of the malaria-related hospital admissions.
Malaria hospitalizations were more common in the US than hospitalizations for many other travel-associated diseases. For example, between 2000 and 2014, dengue fever generated an average of 259 hospitalizations a year (compared with 1489 for malaria).
The average cost of treating a malaria patient was $25,789, and the total bill for treating malaria patients in the US from 2000 to 2014 was about $555 million.
The researchers estimated that about 2100 people in the US contract malaria each year, based on their finding that about 69% of cases require hospital treatment.
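The back-calculation behind that annual estimate can be reproduced from the figures reported in the article; this is a rough sketch, and the 69% hospitalization fraction is the study's own reported figure, not an independent value.

```python
# Rough reconstruction of the annual US malaria case estimate,
# using figures reported in the article.
avg_admissions_per_year = 1489      # article's reported annual hospitalizations
hospitalized_fraction = 0.69        # reported share of cases hospitalized

# If ~69% of all cases end up in the hospital, total cases are
# hospitalizations divided by that fraction.
estimated_cases_per_year = avg_admissions_per_year / hospitalized_fraction
print(round(estimated_cases_per_year))  # ~2100 cases per year
```

The result lands near 2100, above the CDC's official range of 1500 to 2000 cases per year.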
That case count would exceed the high end of the official estimate from the US Centers for Disease Control and Prevention (CDC) of 1500 to 2000 cases per year.
Dr Khuu attributed the difference to the fact that the CDC’s malaria count is based on reports submitted to the agency by hospitals or physicians, and hospital admission records that were used in the current study may capture additional cases that have not been reported to CDC.
While those admissions records did not include travel history, the researchers believe the malaria infections they documented most likely were acquired during travel to parts of Africa, Asia, and Latin America, where malaria is still common.
However, Dr Khuu noted that mosquitoes capable of carrying malaria are common in many parts of the US, so an increase in the number of travelers coming home with the disease raises the risk of malaria re-establishing itself in the US.
A spouse’s cancer diagnosis can lower household income
A spouse’s cancer diagnosis can significantly diminish family income, according to research published in the Journal of Health Economics.
Investigators tracked changes in employment and income among working-age couples in Canada and found that, on average, a spousal cancer diagnosis results in a 5% decline in household income for men and a 9% decline for women.
“The average annual household income for the working-age couples we studied was about $100,000, so the loss of income per family is about $5000 to $9000, which is a pretty substantial decline,” said study author R. Vincent Pohl, PhD, of the University of Georgia in Athens, Georgia.
“In a situation where one household member has a devastating diagnosis, it leads to the whole household suffering economically.”
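Dr Pohl's dollar figures follow directly from the reported percentage declines; a quick sketch of the arithmetic, using the article's approximate $100,000 average annual household income:

```python
# Dollar impact of the reported household income declines, using the
# article's ~$100,000 average annual household income for the cohort.
avg_household_income = 100_000
decline_men = 0.05    # household decline when a man's wife is diagnosed
decline_women = 0.09  # household decline when a woman's husband is diagnosed

loss_men = avg_household_income * decline_men      # ~$5,000 per year
loss_women = avg_household_income * decline_women  # ~$9,000 per year
print(loss_men, loss_women)
```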
One reason for the income decline is attributed to what’s known as the caregiver effect—when one family member reduces his or her own employment to support another.
“We thought that the household’s lessened income could happen in one of two ways,” Dr Pohl said. “One is that the person who is diagnosed might not be able to work because they are getting treatment or they’re too sick to work.”
“The second is what happens to their spouse. Does the spouse work more to make up for the lost income or does the spouse also reduce his or her labor supply in order to take care of the spouse that is diagnosed with cancer? We find the latter, that spouses reduce their labor supply and therefore have lowered income levels, which leads to the household having lower income levels as well.”
The investigators found that, in the 5 years after a spouse’s cancer diagnosis, both husbands and wives reduced their employment rates by about 2.4 percentage points, on average.
The women had lower average employment rates, so the decrease represented a larger relative decline for them.
When a wife was diagnosed with cancer, her husband’s annual earnings decreased by about $2000, or 3.5% of his income.
When a husband was diagnosed with cancer, his wife’s annual earnings decreased by about $1500, or 6% of her income.
Total family income decreased by up to 4.8% among men and 8.5% among women.
The investigators found the declines were due to lower earnings among both cancer patients and their spouses.
“What we need to think about, in terms of policy implications, is how we can protect not just individuals from the consequences of getting sick, but their entire family,” Dr Pohl said. “That’s not really something that existing policies do.”
“If you think about disability insurance, it’s a function of an individual’s inability to work. It doesn’t take into account that family members might have to take care of an individual and therefore might also lose their job or reduce their working hours and, thus, their income.”
Dr Pohl said this study allowed the investigators to examine behavior on a level that’s representative for the entire country of Canada, but the findings may not be transferable to the US, where healthcare is handled differently than in many developed nations.
“One reason why we don’t see that the spouse works more, potentially, is that health insurance is not provided through jobs in Canada,” Dr Pohl said. “In the United States, we could expect that if one spouse is diagnosed with a disease, the other spouse has to keep their job in order to keep health insurance for the family.”
A spouse’s cancer diagnosis can significantly diminish family income, according to research published in the Journal of Health Economics.
Investigators tracked changes in employment and income among working-age couples in Canada and found that, on average, a spousal cancer diagnosis results in a 5% decline in household income for men and a 9% decline for women.
“The average annual household income for the working-age couples we studied was about $100,000, so the loss of income per family is about $5000 to $9000, which is a pretty substantial decline,” said study author R. Vincent Pohl, PhD, of the University of Georgia in Athens, Georgia.
“In a situation where one household member has a devastating diagnosis, it leads to the whole household suffering economically.”
One reason for the income decline is attributed to what’s known as the caregiver effect—when one family member reduces his or her own employment to support another.
“We thought that the household’s lessened income could happen in one of two ways,” Dr Pohl said. “One is that the person who is diagnosed might not be able to work because they are getting treatment or they’re too sick to work.”
“The second is what happens to their spouse. Does the spouse work more to make up for the lost income or does the spouse also reduce his or her labor supply in order to take care of the spouse that is diagnosed with cancer? We find the latter, that spouses reduce their labor supply and therefore have lowered income levels, which leads to the household having lower income levels as well.”
The investigators found that, in the 5 years after a spouse’s cancer diagnosis, both husbands and wives reduced their employment rates by about 2.4 percentage points, on average.
The women had lower average employment rates, so the decrease represented a larger relative decline for them.
When a wife was diagnosed with cancer, her husband’s annual earnings decreased by about $2000, or 3.5% of his income.
When a husband was diagnosed with cancer, his wife’s annual earnings decreased by about $1500, or 6% of her income.
Total family income decreased by up to 4.8% among men and 8.5% among women.
The investigators found the declines were due to lower earnings among both cancer patients and their spouses.
“What we need to think about, in terms of policy implications, is how we can protect not just individuals from the consequences of getting sick, but their entire family,” Dr Pohl said. That’s not really something that existing policies do.”
“If you think about disability insurance, it’s a function of an individual’s inability to work. It doesn’t take into account that family members might have to take care of an individual and therefore might also lose their job or reduce their working hours and, thus, their income.”
Dr Pohl said this study allowed the investigators to examine behavior on a level that’s representative for the entire country of Canada, but the findings may not be transferable to the US, where healthcare is handled differently than in many developed nations.
“One reason why we don’t see that the spouse works more, potentially, is that health insurance is not provided through jobs in Canada,” Dr Pohl said. “In the United States, we could expect that if one spouse is diagnosed with a disease, the other spouse has to keep their job in order to keep health insurance for the family.”
A spouse’s cancer diagnosis can significantly diminish family income, according to research published in the Journal of Health Economics.
Investigators tracked changes in employment and income among working-age couples in Canada and found that, on average, a spousal cancer diagnosis results in a 5% decline in household income for men and a 9% decline for women.
“The average annual household income for the working-age couples we studied was about $100,000, so the loss of income per family is about $5000 to $9000, which is a pretty substantial decline,” said study author R. Vincent Pohl, PhD, of the University of Georgia in Athens, Georgia.
“In a situation where one household member has a devastating diagnosis, it leads to the whole household suffering economically.”
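The dollar figures Dr Pohl quotes follow directly from the percentages; a quick illustrative check (the $100,000 baseline is the cohort average reported in the article, and the function itself is not part of the study):

```python
# Illustrative check of the income-loss arithmetic quoted in the article.
# The baseline and percentage declines come from the text; the function is
# purely for demonstration.

def household_income_loss(baseline_income, decline_pct):
    """Return the dollar loss implied by a percentage decline in income."""
    return baseline_income * decline_pct / 100

baseline = 100_000  # average annual household income in the cohort

loss_when_wife_diagnosed = household_income_loss(baseline, 5)
loss_when_husband_diagnosed = household_income_loss(baseline, 9)

print(f"Loss when wife diagnosed:    ${loss_when_wife_diagnosed:,.0f}")
print(f"Loss when husband diagnosed: ${loss_when_husband_diagnosed:,.0f}")
```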
One reason for the income decline is attributed to what’s known as the caregiver effect—when one family member reduces his or her own employment to support another.
“We thought the decline in household income could happen in one of two ways,” Dr Pohl said. “One is that the person who is diagnosed might not be able to work because they are getting treatment or they’re too sick to work.”
“The second is what happens to their spouse. Does the spouse work more to make up for the lost income or does the spouse also reduce his or her labor supply in order to take care of the spouse that is diagnosed with cancer? We find the latter, that spouses reduce their labor supply and therefore have lowered income levels, which leads to the household having lower income levels as well.”
The investigators found that, in the 5 years after a spouse’s cancer diagnosis, both husbands and wives reduced their employment rates by about 2.4 percentage points, on average.
The women had lower average employment rates, so the decrease represented a larger relative decline for them.
When a wife was diagnosed with cancer, her husband’s annual earnings decreased by about $2000, or 3.5% of his income.
When a husband was diagnosed with cancer, his wife’s annual earnings decreased by about $1500, or 6% of her income.
Total family income decreased by up to 4.8% among men and 8.5% among women.
The investigators found the declines were due to lower earnings among both cancer patients and their spouses.
“What we need to think about, in terms of policy implications, is how we can protect not just individuals from the consequences of getting sick, but their entire family,” Dr Pohl said. “That’s not really something that existing policies do.”
“If you think about disability insurance, it’s a function of an individual’s inability to work. It doesn’t take into account that family members might have to take care of an individual and therefore might also lose their job or reduce their working hours and, thus, their income.”
Dr Pohl said this study allowed the investigators to examine behavior on a level that’s representative for the entire country of Canada, but the findings may not be transferable to the US, where healthcare is handled differently than in many developed nations.
“One reason why we don’t see that the spouse works more, potentially, is that health insurance is not provided through jobs in Canada,” Dr Pohl said. “In the United States, we could expect that if one spouse is diagnosed with a disease, the other spouse has to keep their job in order to keep health insurance for the family.”
Price transparency doesn’t impact ordering of lab tests
Seeing the cost of lab tests in patients’ health records doesn’t deter doctors from ordering the tests, according to research published in JAMA Internal Medicine.
Results of a large study showed that displaying Medicare allowable fees for inpatient lab tests did not have an overall impact on how clinicians ordered tests.
“Price transparency is increasingly being considered by hospitals and other healthcare organizations as a way to nudge doctors and patients toward higher-value care, but the best way to design these types of interventions has not been well-tested,” said study author Mitesh S. Patel, MD, of the University of Pennsylvania School of Medicine in Philadelphia.
“Our findings indicate that price transparency alone was not enough to change clinician behavior and that future price transparency interventions may need to be better targeted, framed, or combined with other approaches to be more successful.”
In the new study—the largest of its kind—researchers randomly assigned 60 groups of inpatient lab tests to either display Medicare allowable fees in the patient’s electronic health record (intervention arm) or not (control arm).
The trial was conducted at 3 hospitals within the University of Pennsylvania Health System over a 1-year period. Researchers compared changes in the number of tests ordered per patient per day, and associated fees, for 98,529 patients (totaling 142,921 hospital admissions).
In the year prior to the study, when cost information was not displayed, the average number of tests and associated fees ordered per patient per day was 2.31 tests, totaling $27.77, in the control group and 3.93 tests, totaling $37.84, in the intervention group.
After the intervention, when cost information was displayed for the intervention group, the average number of tests and associated fees ordered per patient per day did not change significantly. It was 2.34 tests, totaling $27.59, in the control group, and 4.01 tests, totaling $38.85, in the intervention group.
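The before/after comparison amounts to a simple difference-in-differences on tests ordered per patient per day. A back-of-the-envelope sketch using the figures quoted above (the trial's actual statistical analysis was more elaborate):

```python
# Back-of-the-envelope difference-in-differences on the per-patient-per-day
# test counts quoted above. Not the trial's actual statistical model.

before = {"control": 2.31, "intervention": 3.93}
after = {"control": 2.34, "intervention": 4.01}

change_control = after["control"] - before["control"]                 # about +0.03
change_intervention = after["intervention"] - before["intervention"]  # about +0.08

# Excess change in the intervention arm = effect attributable to price display
did = change_intervention - change_control

print(f"Change, control arm:       {change_control:+.2f} tests/patient/day")
print(f"Change, intervention arm:  {change_intervention:+.2f} tests/patient/day")
print(f"Difference-in-differences: {did:+.2f} tests/patient/day")
```

The tiny residual difference (about 0.05 tests per patient per day) is consistent with the reported lack of a significant overall effect.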
Though the study showed no overall effect, the researchers noted findings in specific patient groups that have implications for how to improve price transparency in the future.
For example, there was a slight decrease in test ordering for patients admitted to the intensive care unit—an environment in which doctors are making rapid decisions and may be more exposed to the price transparency intervention.
The researchers also found the most expensive tests were ordered less often, and the cheaper tests were ordered more often.
“Electronic health records are constantly being changed, from how choices are offered to the way information is framed,” said study author C. William Hanson, MD, of the University of Pennsylvania Health System.
“By systematically testing these approaches through real-world experiments, health systems can leverage this new evidence to continue to improve the way care is delivered for our patients.”
“Price transparency continues to be an important initiative,” Dr Patel added, “but the results of this clinical trial indicate that these approaches need to be better designed to effectively change behavior.”
Guideline for reversal of antithrombotics in intracranial hemorrhage
Clinical Question: What is the current guideline for reversal of antithrombotics in intracranial hemorrhage (ICH)?
Background: Antithrombotics are used to treat or decrease the risk of thromboembolic events, and the use is expected to rise in the future because of an aging population and conditions such as atrial fibrillation. Patients on antithrombotics who experience spontaneous ICH have a higher risk of death or poor outcome, compared with those who are not. Rapid reversal of coagulopathy may help to improve outcomes.
Study design: A 13-person, multi-institutional, international committee with expertise in relevant medical fields reviewed a total of 488 articles to develop guidelines and treatment recommendations.
Synopsis: The committee developed guidelines for the reversal of antithrombotics after reviewing a total of 488 articles up through November 2015. The quality of evidence and treatment recommendations were drafted based on the GRADE system, as follows:
• Vitamin K antagonists: If the international normalized ratio is 1.4 or higher, administer vitamin K 10 mg IV plus 3- or 4-factor prothrombin complex concentrate (PCC) or fresh frozen plasma.
• Direct factor Xa inhibitors: activated charcoal within 2 hours of ingestion; activated PCC or 4-factor PCC.
• Direct thrombin inhibitors – Dabigatran: activated charcoal within 2 hours of ingestion and idarucizumab; consider hemodialysis. Other DTIs: activated PCC or 4-factor PCC.
• Unfractionated heparin: protamine IV.
• Low-molecular-weight heparins – Enoxaparin: protamine IV, dose based on time of enoxaparin administration. Dalteparin/nadroparin/tinzaparin: protamine IV or recombinant factor (rF)VIIa.
• Danaparoid: rFVIIa.
• Pentasaccharides: activated PCC.
• Thrombolytic agents: cryoprecipitate 10 units or antifibrinolytics.
• Antiplatelet agents: desmopressin 0.4 mcg/kg IV; platelet transfusion if a neurosurgical procedure is planned.
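The bulleted recommendations amount to an agent-class-to-antidote lookup. One way to condense them for quick reference (the entries below are abbreviated from the bullets above, not from the full guideline, and are illustrative only):

```python
# Agent class -> condensed first-line reversal options, abbreviated from the
# bullets above. A reference summary only, not a substitute for the guideline.

REVERSAL = {
    "vitamin K antagonist (INR >= 1.4)": "vitamin K 10 mg IV + 3- or 4-factor PCC or FFP",
    "direct factor Xa inhibitor": "activated charcoal (<2 h); activated PCC or 4-factor PCC",
    "dabigatran": "activated charcoal (<2 h); idarucizumab; consider hemodialysis",
    "other direct thrombin inhibitor": "activated PCC or 4-factor PCC",
    "unfractionated heparin": "protamine IV",
    "enoxaparin": "protamine IV (dose based on timing of last dose)",
    "dalteparin/nadroparin/tinzaparin": "protamine IV or rFVIIa",
    "danaparoid": "rFVIIa",
    "pentasaccharide": "activated PCC",
    "thrombolytic": "cryoprecipitate 10 units or antifibrinolytics",
    "antiplatelet agent": "desmopressin; platelet transfusion if neurosurgery planned",
}

def lookup_reversal(agent_class):
    """Return the condensed reversal strategy for an antithrombotic class."""
    return REVERSAL.get(agent_class, "not listed; consult the full guideline")

print(lookup_reversal("dabigatran"))
```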
Bottom Line: This guideline, from the Neurocritical Care Society and the Society of Critical Care Medicine, provides agent-specific recommendations for urgent reversal of antithrombotics in intracranial hemorrhage.
Citation: Frontera J, Lewin JJ, Rabinstein AA, et al. “Guideline for reversal of antithrombotics in intracranial hemorrhage: a statement for healthcare professionals from the Neurocritical Care Society and Society of Critical Care Medicine.” Neurocrit Care. 2016 Feb;24(1):6-46.
Dr. Kim is clinical assistant professor in the division of hospital medicine, Loyola University Chicago, Maywood, Ill.
Genetic advances push personalization of cancer prevention
BOSTON – The avenues for identifying those individuals most likely to benefit from surveillance and chemoprevention of colorectal cancer (CRC) are multiplying, experts agreed at the 2017 AGA Tech Summit sponsored by the AGA Center for GI Innovation and Technology.
In a series of three consecutive presentations, the first updated efforts to improve germline genetic testing, the second identified biomarkers that appear likely to improve the risk-to-benefit ratio of aspirin in cancer prevention, and the third cautioned about the potential pitfalls of genetic testing for cancer risk, while ultimately highlighting the value of this tool when performed appropriately.
Novel tools for germline testing
Dr. Syngal noted that the management of hereditary cancers differs from that of sporadic cancers in several ways – the surgical management of cancer, screening, and surveillance after the treatment of the primary cancer, surveillance for associated cancers, screening and surveillance of family members, and reproductive counseling.
In her talk, “Novel Tools to Accelerate Implementation of Germline Genetic Testing,” Dr. Syngal discussed the PREMM (Prediction Model for MLH1 and MSH2 gene mutations) risk assessment tool first developed in 2006. The web-based PREMM calculates a risk score based on personal and family histories, Dr. Syngal said. “If an individual scores above a certain threshold, they are referred to genetic counseling and possibly testing,” she noted.
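The referral logic Dr. Syngal describes is a simple threshold rule applied to the calculator's output. A minimal sketch (the 5% cutoff shown here is an illustrative assumption of how such models are commonly applied, not a value stated in this article):

```python
# Threshold-based referral rule of the kind Dr. Syngal describes.
# The risk score would come from the PREMM calculator itself; here it is an
# input, and the 5% cutoff is an illustrative assumption.

REFERRAL_THRESHOLD = 0.05  # assumed cutoff on predicted mutation probability

def should_refer(premm_score):
    """Refer to genetic counseling/testing if the risk score meets the threshold."""
    return premm_score >= REFERRAL_THRESHOLD

print(should_refer(0.08))  # above threshold: refer
print(should_refer(0.02))  # below threshold: do not refer
```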
A second-generation tool for identifying patients at increased genetic risk for CRC, called PREMM1,2,6, was based on a larger patient sample, and published in 2011. That model “is recommended by the National Comprehensive Cancer Network for testing for Lynch syndrome, so its use is already part of clinical practice in gastroenterology,” Dr. Syngal reported.
A third model, called PREMM5, was developed on the basis of an even larger patient sample, and it is now being tested in primary care practices with plans in the works to implement the model in general gastroenterology, oncology, and ob.gyn. settings, she added.
“There are probably about 1 million people in the United States who have Lynch syndrome and don’t know it,” according to Dr. Syngal, who noted that PREMM can be completed in only a minute or two by clinicians or patients once they have collected information about CRC history among their blood relatives.
Ultimately, the same approaches can be applied to risk assessment for other GI cancers: the goal is first to identify patients with an increased likelihood of harboring a genetic mutation that predicts cancer risk and then to employ multigene panels to narrow down those who could benefit from specific surveillance strategies.
Even on its own, the work to develop better strategies to identify Lynch syndrome is important, according to Dr. Syngal. When a group of experts was recently convened under the Cancer Moonshot Program championed by former Vice President Joe Biden, “Lynch syndrome was identified as the top priority in terms of cancer prevention” over the coming 5-10 years.
Biomarkers for GI cancer chemoprevention
Citing a long list of studies and trials that have associated aspirin (ASA) with protection against CRC, including a randomized controlled trial in Lynch syndrome, Dr. Chan characterized the evidence of a chemoprotective effect of ASA against GI cancer as “overwhelming.” However, not all individuals may derive a favorable risk-to-benefit ratio, because the antiplatelet effects of ASA increase the risk of bleeding events in the GI tract and elsewhere.
The best approach to providing a favorable risk-to-benefit ratio may involve identifying biomarkers that predict benefit. The progress in understanding how ASA inhibits pathways of tumor development has provided candidates, according to Dr. Chan, citing a series of studies, including those conducted at his center.
One proof of concept was derived from work evaluating tumor expression of cyclooxygenase (COX), the enzyme that converts arachidonic acid to prostaglandins. Dr. Chan explained that prostaglandins are linked to many downstream cancer-promoting pathways. In a study that involved molecular analysis of tumor specimens from patients who participated in a cohort study, the presence of the COX-2 isoenzyme was a differentiator.
“When the evaluation was performed according to COX-2 status, we saw about a 30% reduction in risk for COX-2-positive tumors. In contrast, we did not see any appreciable reduction in risk in tumors that were COX-2-negative,” Dr. Chan reported.
Although Dr. Chan acknowledged that about 80% of CRC tumors are COX-2-positive, which may explain why CRC protection from ASA is observed in an unselected population, he reported that this study has supported pursuit of additional biomarkers, particularly those that might predict protection from ASA before or at the very earliest stages of the neoplastic process.
One such candidate is the 15-prostaglandin dehydrogenase (15-PGDH) enzyme, which catabolizes, or breaks down, prostaglandins. When 15-PGDH levels were evaluated in normal tissue samples adjacent to CRC tumors, there was about a 50% reduction in risk of CRC in those with high 15-PGDH, “but there was actually no reduction in risk among those with low 15-PGDH,” Dr. Chan reported.
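Those subgroup figures also explain why protection shows up in unselected populations: the effect is a prevalence-weighted average. A back-of-the-envelope sketch using the COX-2 numbers quoted above:

```python
# Back-of-the-envelope: population-level risk reduction implied by the
# subgroup figures above (~30% reduction in the ~80% of CRC tumors that are
# COX-2-positive, no appreciable reduction in COX-2-negative tumors).

def population_risk_reduction(frac_positive, reduction_positive, reduction_negative=0.0):
    """Prevalence-weighted average of biomarker-subgroup risk reductions."""
    return frac_positive * reduction_positive + (1 - frac_positive) * reduction_negative

overall = population_risk_reduction(frac_positive=0.80, reduction_positive=0.30)
print(f"Implied population-level risk reduction: {overall:.0%}")
```

The weighted average comes out to roughly a quarter, which is why an unselected cohort still shows a protective signal even though only biomarker-positive patients drive it.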
Currently, ASA prophylaxis for preventing CRC is recommended by the U.S. Preventive Services Task Force in individuals who also have an increased risk of cardiovascular disease, but there is no accepted formula for weighing CRC risk against risk of bleeding. Biomarkers like 15-PGDH could be instrumental in guiding decisions for gastroenterologists.
“So you can imagine a strategy in which the endoscopist removes the polyp and also takes a biopsy of adjacent normal tissue to determine whether there are sufficient levels of 15-PGDH to potentially predict whether ASA will have a preventive effect,” Dr. Chan said.
Other biomarkers are also being pursued for the same potential to identify patients with a high likelihood of benefiting from ASA, which Dr. Chan suggested could help clinicians determine when ASA is appropriate.
In clinical medicine, “we think a lot about biomarkers in respect to predicting who will benefit from cancer therapy, but we have thought less about whether we have the ability to use biomarkers to determine who would potentially be benefiting from a preventive intervention,” Dr. Chan said.
Genetic testing for GI cancer risk
“We have to be honest with ourselves about the limitations,” said Dr. Yurgelun, who expressed concern about how some of the commercially available multigene panels are being marketed and employed by both patients and physicians.
Despite the expectations of those without experience interpreting the results, “genetic testing often fails to give a black and white answer,” Dr. Yurgelun said in an interview. “Finding an inherited mutation in a cancer susceptibility gene carries plenty of uncertainty in many cases.”
One issue for interpretation of multigene panels involves variants of uncertain significance (VUS). These are germline genetic alterations that have been observed in individuals undergoing genetic testing but have not been confirmed as causative of inherited cancer risk.
“VUS are findings for which data are insufficient to classify them as either pathogenic or benign, and there’s significant concern about both patients and clinicians overinterpreting or misinterpreting their significance,” Dr. Yurgelun cautioned. In particular, aggressive intervention undertaken because of a VUS could introduce risk without benefit. He cited a recently published article that documented aggressive surgical management in the presence of VUS mutations “even though their genetic testing really should not have dictated that.”
Even when a mutation is present with a clear association with increased cancer risk, penetrance is another concept that may not be fully appreciated in gene panel interpretation. Penetrance expresses the proportion of patients with that mutation who will develop a cancer with which the gene mutation is associated.
“An inherited mutation in a cancer susceptibility gene influences their probability of developing cancer, but the odds are never zero and they are essentially never 100%,” Dr. Yurgelun explained. “Genetics is not necessarily destiny,” he added.
Genetic testing is important now, and its clinical value is improving, Dr. Yurgelun emphasized, but he cautioned that the “explosion in the availability of genetic testing” has the potential to cause unrealistic expectations and the potential for misuse of the information.
“Clinicians should expect to still encounter lots of uncertainty with genetic testing, which is why it’s so critically important that such testing be done with the guidance of professional genetic counselors who can help navigate many of these uncertainties. While our technology now allows for a tremendous breadth of genetic testing options, these options have hugely expanded the number of questions and the amount of uncertainty that can be generated,” he added.
Heidi Splete contributed to this report.
Currently, ASA prophylaxis for preventing CRC is recommended by the U.S. Preventive Services Task Force in individuals who also have an increased risk of cardiovascular disease, but there is no accepted formula for weighing CRC risk against risk of bleeding. Biomarkers like 15-PGDH could be instrumental in guiding decisions for gastroenterologists.
“So you can imagine a strategy in which the endoscopist removes the polyp and also takes a biopsy of adjacent normal tissue to determine whether there are sufficient levels of 15-PGDH to potentially predict whether ASA will have a preventive effect,” Dr. Chan said.
There are other biomarkers also being pursued for their same potential to identify patients with a high likelihood to benefit from ASA, which Dr. Chan suggested could help clinicians determine when ASA is appropriate.
In clinical medicine, “we think a lot about biomarkers in respect to predicting who will benefit from cancer therapy, but we have thought less about whether we have the ability to use biomarkers to determine who would potentially be benefiting from a preventive intervention,” Dr. Chan said.
Genetic testing for GI cancer risk
“We have to be honest with ourselves about the limitations,” said Dr. Yurgelun, who expressed concern about how some of the commercially available multigene panels are being marketed and employed by both patients and physicians.
Despite the expectations of those without experience interpreting the results, “genetic testing often fails to give a black and white answer,” Dr. Yurgelun said in an interview. “Finding an inherited mutation in a cancer susceptibility gene carries plenty of uncertainty in many cases.”
One issue for interpretation of multigene panels involves variants of uncertain significance (VUS). These are germline genetic alterations that have been observed in individuals undergoing genetic testing but have not been confirmed as causative of inherited cancer risk.
“VUS are findings for which data are insufficient to classify them as either pathogenic or benign, and there’s significant concern about both patients and clinicians overinterpreting or misinterpreting their significance,” Dr. Yurgelun cautioned. In particular, aggressive intervention undertaken because of a VUS could introduce risk without benefit. He cited a recently published article that documented aggressive surgical management in the presence of VUS mutations “even though their genetic testing really should not have dictated that.”
Even when a mutation is present with a clear association with increased cancer risk, penetrance is another concept that may not be fully appreciated in gene panel interpretation. Penetrance expresses the proportion of patients with that mutation who will develop a cancer with which the gene mutation is associated.
“An inherited mutation in a cancer susceptibility gene influences their probability of developing cancer, but the odds are never zero and they are essentially never 100%,” Dr. Yurgelun explained. “Genetics is not necessarily destiny,” he added.
Genetic testing is important now, and its clinical value is improving, Dr. Yurgelun emphasized, but he cautioned that the “explosion in the availability of genetic testing” has the potential to cause unrealistic expectations and the potential for misuse of the information.
“Clinicians should expect to still encounter lots of uncertainty with genetic testing, which is why it’s so critically important that such testing be done with the guidance of professional genetic counselors who can help navigate many of these uncertainties. While our technology now allows for a tremendous breadth of genetic testing options, these options have hugely expanded the number of questions and the amount of uncertainty that can be generated,” he added.
Heidi Splete contributed to this report.
The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel
BOSTON – The avenues for identifying those individuals most likely to benefit from surveillance and chemoprevention of colorectal cancer (CRC) are multiplying, experts agreed at the 2017 AGA Tech Summit sponsored by the AGA Center for GI Innovation and Technology.
In a series of three consecutive presentations, the first updated efforts to improve germline genetic testing. The second identified biomarkers that appear likely to improve the risk-to-benefit ratio from aspirin in cancer prevention. The third cautioned about the potential pitfalls of genetic testing for cancer risk even though the overall effect was to highlight the value of this tool when performed appropriately.
Novel tools for germline testing
Dr. Syngal noted that the management of hereditary cancers differs from that of sporadic cancers in several ways – the surgical management of cancer, screening, and surveillance after the treatment of the primary cancer, surveillance for associated cancers, screening and surveillance of family members, and reproductive counseling.
In her talk, “Novel Tools to Accelerate Implementation of Germline Genetic Testing,” Dr. Syngal discussed the PREMM (Prediction Model for MLH1 and MSH2 gene mutations) risk assessment tool first developed in 2006. The web-based PREMM calculates a risk score based on personal and family histories, Dr. Syngal said. “If an individual scores above a certain threshold, they are referred to genetic counseling and possibly testing,” she noted.
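The threshold-based referral logic Dr. Syngal describes can be sketched as follows. This is an illustrative sketch only: the scoring threshold and the wording of the recommendations are hypothetical placeholders, not the published PREMM model or its coefficients.

```python
# Illustrative sketch of threshold-based referral logic of the kind PREMM uses.
# The 2.5 threshold and the recommendation strings are hypothetical placeholders,
# NOT the published PREMM model.

def referral_decision(risk_score: float, threshold: float = 2.5) -> str:
    """Return a referral recommendation given a percent risk score."""
    if risk_score >= threshold:
        return "refer for genetic counseling and possible testing"
    return "no referral indicated by this score alone"

print(referral_decision(4.1))  # above threshold: refer
print(referral_decision(1.2))  # below threshold: no referral
```

In practice the score itself would come from a validated model fit to personal and family history; the point here is only the decision structure: compute a score, compare to a cutoff, refer when the cutoff is met.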
A second-generation tool for identifying patients at increased genetic risk for CRC, called PREMM1,2,6, was based on a larger patient sample, and published in 2011. That model “is recommended by the National Comprehensive Cancer Network for testing for Lynch syndrome, so its use is already part of clinical practice in gastroenterology,” Dr. Syngal reported.
A third model, called PREMM5, was developed on the basis of an even larger patient sample, and it is now being tested in primary care practices with plans in the works to implement the model in general gastroenterology, oncology, and ob.gyn. settings, she added.
“There are probably about 1 million people in the United States who have Lynch syndrome and don’t know it,” according to Dr. Syngal, who noted that PREMM can be completed in only a minute or two by clinicians or patients once they have collected information about CRC history among their blood relatives.
Ultimately, the same approaches can be applied to risk assessment for other GI cancers, in which the goal is first to identify patients at an increased likelihood of having a genetic mutation that predicts cancer risk and then to employ multigene panels to narrow down those who could benefit from specific surveillance strategies.
Even on its own, the work to develop better strategies to identify Lynch syndrome is important, according to Dr. Syngal. When a group of experts was recently convened under the Cancer Moonshot Program championed by former Vice President Joe Biden, “Lynch syndrome was identified as the top priority in terms of cancer prevention” over the coming 5-10 years.
Biomarkers for GI cancer chemoprevention
Citing a long list of studies and trials that have associated aspirin (acetylsalicylic acid, ASA) with protection against CRC, including a randomized controlled trial in Lynch syndrome, Dr. Chan characterized the evidence of a chemoprotective effect from ASA against GI cancer as “overwhelming.” However, not all individuals may derive a favorable risk-to-benefit ratio, because the antiplatelet effects of ASA increase the risk of bleeding events in the GI tract and elsewhere.
The best approach to providing a favorable risk-to-benefit ratio may involve identifying biomarkers that predict benefit. The progress in understanding how ASA inhibits pathways of tumor development has provided candidates, according to Dr. Chan, citing a series of studies, including those conducted at his center.
One proof of concept was derived from work evaluating tumor expression of cyclooxygenase (COX), the enzyme that converts arachidonic acid to prostaglandins. Dr. Chan explained that prostaglandins are linked to many downstream cancer-promoting pathways. In a study that involved molecular analysis of tumor specimens from patients who participated in a cohort study, the presence of the COX-2 isoenzyme was a differentiator.
“When the evaluation was performed according to COX-2 status, we saw about a 30% reduction in risk for COX-2-positive tumors. In contrast, we did not see any appreciable reduction in risk in tumors that were COX-2-negative,” Dr. Chan reported.
Although Dr. Chan acknowledged that about 80% of CRC tumors are COX-2-positive, which may explain why CRC protection from ASA is observed in an unselected population, he reported that this study has supported pursuit of additional biomarkers, particularly those that might predict protection from ASA before or at the very earliest stages of the neoplastic process.
One such candidate is the enzyme 15-prostaglandin dehydrogenase (15-PGDH), which catabolizes, or breaks down, prostaglandins. When 15-PGDH levels were evaluated in normal tissue samples adjacent to CRC tumors, there was about a 50% reduction in risk of CRC in those with high 15-PGDH, “but there was actually no reduction in risk among those with low 15-PGDH,” Dr. Chan reported.
Currently, ASA prophylaxis for preventing CRC is recommended by the U.S. Preventive Services Task Force in individuals who also have an increased risk of cardiovascular disease, but there is no accepted formula for weighing CRC risk against risk of bleeding. Biomarkers like 15-PGDH could be instrumental in guiding decisions for gastroenterologists.
“So you can imagine a strategy in which the endoscopist removes the polyp and also takes a biopsy of adjacent normal tissue to determine whether there are sufficient levels of 15-PGDH to potentially predict whether ASA will have a preventive effect,” Dr. Chan said.
Other biomarkers are also being pursued for the same potential to identify patients with a high likelihood of benefiting from ASA, which Dr. Chan suggested could help clinicians determine when ASA is appropriate.
In clinical medicine, “we think a lot about biomarkers in respect to predicting who will benefit from cancer therapy, but we have thought less about whether we have the ability to use biomarkers to determine who would potentially be benefiting from a preventive intervention,” Dr. Chan said.
Genetic testing for GI cancer risk
“We have to be honest with ourselves about the limitations,” said Dr. Yurgelun, who expressed concern about how some of the commercially available multigene panels are being marketed and employed by both patients and physicians.
Despite the expectations of those without experience interpreting the results, “genetic testing often fails to give a black and white answer,” Dr. Yurgelun said in an interview. “Finding an inherited mutation in a cancer susceptibility gene carries plenty of uncertainty in many cases.”
One issue for interpretation of multigene panels involves variants of uncertain significance (VUS). These are germline genetic alterations that have been observed in individuals undergoing genetic testing but have not been confirmed as causative of inherited cancer risk.
“VUS are findings for which data are insufficient to classify them as either pathogenic or benign, and there’s significant concern about both patients and clinicians overinterpreting or misinterpreting their significance,” Dr. Yurgelun cautioned. In particular, aggressive intervention undertaken because of a VUS could introduce risk without benefit. He cited a recently published article that documented aggressive surgical management in the presence of VUS mutations “even though their genetic testing really should not have dictated that.”
Even when a mutation is present with a clear association with increased cancer risk, penetrance is another concept that may not be fully appreciated in gene panel interpretation. Penetrance expresses the proportion of patients with that mutation who will develop a cancer with which the gene mutation is associated.
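Penetrance, as defined above, is simply the fraction of mutation carriers who go on to develop the associated cancer. A tiny worked example (the counts below are invented for illustration, not drawn from any study):

```python
# Hypothetical penetrance illustration: of the people carrying a given
# cancer-susceptibility mutation, what fraction develop the associated cancer?
# These counts are invented for illustration only.
carriers = 200   # people with the mutation
affected = 90    # carriers who develop the associated cancer

penetrance = affected / carriers
print(f"Penetrance: {penetrance:.0%}")
```

A penetrance of 45% in this made-up example would mean substantially elevated risk for carriers, yet most of the uncertainty Dr. Yurgelun describes remains: the odds are neither zero nor 100%.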
“An inherited mutation in a cancer susceptibility gene influences their probability of developing cancer, but the odds are never zero and they are essentially never 100%,” Dr. Yurgelun explained. “Genetics is not necessarily destiny,” he added.
Genetic testing is important now, and its clinical value is improving, Dr. Yurgelun emphasized, but he cautioned that the “explosion in the availability of genetic testing” has the potential to cause unrealistic expectations and the potential for misuse of the information.
“Clinicians should expect to still encounter lots of uncertainty with genetic testing, which is why it’s so critically important that such testing be done with the guidance of professional genetic counselors who can help navigate many of these uncertainties. While our technology now allows for a tremendous breadth of genetic testing options, these options have hugely expanded the number of questions and the amount of uncertainty that can be generated,” he added.
Heidi Splete contributed to this report.
EXPERT ANALYSIS FROM THE 2017 AGA TECH SUMMIT
Can Metabolomic Profiling Predict Parkinson’s Disease Progression?
Metabolomic profiling of plasma strongly predicts Parkinson’s disease progression, according to a study published February 28 in Neurology. Metabolomic biomarkers may help researchers better understand Parkinson’s disease pathogenesis.
“Our findings offer novel biomarkers for studying Parkinson’s disease progression and, with them, several new directions for investigation of its pathogenesis,” said Peter A. LeWitt, MD, Professor of Neurology at Henry Ford Hospital and Wayne State University School of Medicine in Detroit. Diagnosing and measuring progression of Parkinson’s disease continues to present many challenges. How to identify biomarkers with high specificity and sensitivity also remains unclear. The latest methodologies of metabolomic analysis can measure a large fraction of low-molecular-weight compounds in biospecimens for characterizing the biochemical environment of the body.
Dr. LeWitt and colleagues sought to determine whether a Parkinson’s disease–specific biochemical signature might be found in plasma and CSF. They used ultra-high performance liquid chromatography linked to gas chromatography and tandem mass spectrometry to measure concentrations of small-molecule constituents of plasma and CSF in 49 unmedicated patients with mild parkinsonism. Participants ranged in age from 38 to 78 (mean, 62.9 years). Investigators collected specimens twice: at baseline and up to 24 months later. During the study, patients’ mean Unified Parkinson’s Disease Rating Scale (UPDRS) parts II and III scores increased by 47%.
The investigators performed unbiased univariate and multivariate analyses of the measured compounds to determine associations with Parkinson’s disease progression. The analyses included fitting the data in multiple linear regressions with variable selection using the Least Absolute Shrinkage and Selection Operator (LASSO).
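LASSO-style variable selection works by penalizing regression coefficients so that uninformative predictors shrink to exactly zero. A generic sketch on synthetic data follows; the simulated "metabolite" matrix, the true signal indices, and the `alpha` penalty are illustrative assumptions, not the investigators' actual analysis pipeline:

```python
# Generic sketch of LASSO variable selection on synthetic data.
# The data and parameters are illustrative, not from the LeWitt et al. study.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_metabolites = 60, 20
X = rng.normal(size=(n_samples, n_metabolites))

# Simulated outcome depends on only 3 of the 20 "metabolites" plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 1.0 * X[:, 7] \
    + rng.normal(scale=0.5, size=n_samples)

# The L1 penalty drives most coefficients exactly to zero,
# leaving a sparse set of selected predictors.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_ != 0)
print("Selected metabolite indices:", selected)
```

With a strong enough penalty, the surviving nonzero coefficients identify the candidate predictors of the outcome, which is the role variable selection plays in an analysis like the one described above.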
—Erica Tricarico
Suggested Reading
LeWitt PA, Li J, Lu M, et al. Metabolomic biomarkers as strong correlates of Parkinson disease progression. Neurology. 2017;88(9):862-869.