Conference News Roundup—American Heart Association

Does Cholesterol Testing Reduce Risk of Recurrent Stroke?

When a patient has a heart attack or stroke, it is critical for his or her clinician to perform a follow-up cholesterol test, according to a study conducted at the Intermountain Medical Center Heart Institute in Salt Lake City. This additional measure is significantly associated with reduced risk of having another serious cardiovascular episode.

Investigators found that patients who do not follow up with their doctor by getting a low-density lipoprotein (LDL) cholesterol test after a heart attack or stroke are significantly more likely to have a recurrence. They also found a significant and clinically meaningful difference in major adverse outcomes—including death, heart attack, stroke, and a vascular bypass or an angioplasty—based on whether or not a patient has a follow-up measurement of his or her LDL cholesterol.

“It is clear that anyone with a previous heart problem caused by clogged arteries should be taking a cholesterol-lowering medication,” said Kirk U. Knowlton, MD, Director of Cardiovascular Research at the Intermountain Medical Center Heart Institute.

The study of more than 60,000 patients with known heart disease, cerebrovascular disease, or peripheral artery disease, including patients with stroke and heart attack, showed that the major adverse clinical event rate was lower when LDL was measured, both in patients who took cholesterol-lowering statins and in those who did not.

“The large difference is surprising. The risk of dying after three years with no LDL follow-up is 21% versus 5.9% for patients who have an LDL follow-up,” said Dr. Knowlton.

Researchers reviewed Intermountain Healthcare’s enterprise data warehouse to identify all adults who came to one of Intermountain’s 22 hospitals for the first time with a heart attack or stroke. These data included patients with coronary artery disease, cerebrovascular disease, and peripheral arterial disease admitted between January 1, 1999, and December 31, 2013.

Investigators observed patients who survived and were followed for three years or more or until their death. Patient demographics, history, prescribed medications, and whether LDL was measured were analyzed.

The study included 62,070 patients in the database who met the study criteria. The mean age was 66, and 65% of patients were male. Of those who met the criteria, 69.3% had coronary artery disease, 18.6% had cerebrovascular disease, and 12.1% had peripheral arterial disease when they came to the hospital with their first heart attack or stroke.

Researchers found that the risk of a patient having a secondary event or dying decreased in patients who had a follow-up LDL test before a subsequent adverse outcome or before the end of their follow-up.

Coffee Is Associated With Lower Risk of Heart Failure and Stroke

Drinking coffee may be associated with a decreased risk of heart failure or stroke, according to researchers.

Investigators used machine learning to analyze data from the long-running Framingham Heart Study, which includes information about what people eat and their cardiovascular health. They found that every additional cup of coffee consumed per week was associated with a 7% decreased risk of heart failure and an 8% reduced risk of stroke, compared with non-coffee drinkers.
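
For readers who want a feel for what a per-cup association implies at higher intakes, the sketch below compounds the reported figures multiplicatively; this is an illustration only, not the investigators' actual statistical model, and it assumes the per-cup effect is constant across intake levels.

```python
# Illustrative arithmetic only: compounds the reported per-cup associations
# multiplicatively. This is not the study's model; treat the output as a sketch.

HF_PER_CUP = 0.93      # reported 7% lower heart failure risk per additional weekly cup
STROKE_PER_CUP = 0.92  # reported 8% lower stroke risk per additional weekly cup

def relative_risk(per_cup_rr: float, cups_per_week: int) -> float:
    """Relative risk vs. a non-coffee drinker, assuming a constant multiplicative effect."""
    return per_cup_rr ** cups_per_week

for cups in (1, 3, 7):
    print(f"{cups} cup(s)/week: heart failure RR ~{relative_risk(HF_PER_CUP, cups):.2f}, "
          f"stroke RR ~{relative_risk(STROKE_PER_CUP, cups):.2f}")
```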

The researchers further investigated the machine learning results using traditional analysis in two studies with similar sets of data: the Cardiovascular Health Study and the Atherosclerosis Risk in Communities Study. The association between drinking coffee and a decreased risk of heart failure and stroke was observed consistently in all three studies.

Another potential risk factor identified by machine-learning analysis was red meat consumption. The association between red meat consumption and heart failure or stroke was less clear, however. Eating red meat was associated with decreased risk of heart failure and stroke in the Framingham Heart Study, but validating the finding in comparable studies is more challenging due to differences in the definitions of red meat between studies, said the researchers. Further investigation to better determine how red meat consumption affects risk for heart failure and stroke is ongoing.

The researchers also built a predictive model using known risk factors from the Framingham Risk Score such as blood pressure, age, and other patient characteristics associated with cardiovascular disease. “By including coffee in the model, the prediction accuracy increased by 4%. Machine learning may be a useful addition to the way we look at data and may help us to find new ways to lower the risk of heart failure and strokes,” said David Kao, MD, Assistant Professor of Medicine in the Division of Cardiology at the University of Colorado School of Medicine in Aurora.
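
The sketch below shows, on synthetic data, the general shape of such an analysis: fit a risk model with and without a candidate dietary feature and compare predictive accuracy. The data, coefficients, and feature names are invented for illustration and do not reproduce the Framingham analysis; scikit-learn is assumed to be available.

```python
# Synthetic illustration of adding one candidate predictor to a risk model and
# comparing prediction accuracy; not the investigators' actual analysis or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(66, 10, n)
sbp = rng.normal(135, 18, n)            # systolic blood pressure, invented distribution
coffee = rng.integers(0, 15, n)         # cups per week, invented distribution

# Simulated outcome: risk rises with age and blood pressure, falls slightly with coffee.
logit = -9.5 + 0.08 * age + 0.03 * sbp - 0.05 * coffee
event = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, sbp])            # "known risk factors" only
X_full = np.column_stack([age, sbp, coffee])    # plus the candidate feature
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, event, test_size=0.3, random_state=0)

acc_base = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr).score(Xb_te, y_te)
acc_full = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr).score(Xf_te, y_te)
print(f"accuracy without coffee: {acc_base:.3f}, with coffee: {acc_full:.3f}")
```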

Statins May Improve Stroke Outcome

Patients with a history of heart attack or stroke have better outcomes when cholesterol-lowering medications are used after they are discharged from the hospital, according to researchers.

Prior hospital surveys found that statins are not being used consistently in patients admitted following a heart attack or stroke. Researchers also found that when the medication is prescribed, the dose is often lower than needed to provide optimal benefit.

Researchers examined records of more than 62,000 patients from the Intermountain Healthcare system who survived an initial atherosclerotic cardiovascular disease event, such as a heart attack or stroke, between 1999 and 2013. Patients were then followed for three years or until death to assess the effectiveness of statins prescribed at the time of discharge.

“Patients who were prescribed a statin medication following an initial heart attack or stroke reduced their risk of a future adverse event such as a future heart attack, stroke, revascularization, or death by almost 25%—the rate dropped from 34% to 26%,” said Jeffrey L. Anderson, MD, a cardiovascular researcher at the Intermountain Medical Center Heart Institute. “The patients who were discharged on what is considered a high-intensity dose of a statin saw a 21% reduction in their risk,” compared with those discharged on a low-intensity statin dose.
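
To see where “almost 25%” comes from, note that it is a relative risk reduction derived from the two quoted event rates; the snippet below simply restates that arithmetic.

```python
# Relative vs. absolute risk reduction for the event rates quoted above.
rate_without_statin = 0.34   # adverse-event rate without a discharge statin
rate_with_statin = 0.26      # adverse-event rate with a discharge statin

arr = rate_without_statin - rate_with_statin     # absolute reduction: 8 percentage points
rrr = arr / rate_without_statin                  # relative reduction: ~23.5%, "almost 25%"

print(f"absolute risk reduction: {arr:.0%}; relative risk reduction: {rrr:.1%}")
```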

Investigators found that 30% of patients in the study who were discharged from the hospital following a heart attack or stroke were not prescribed a statin, and this omission was associated with worse outcomes for those patients.

Researchers also found that only 13% of patients were given a high-intensity dose of statins, but noted that patients on those higher doses experienced fewer heart attacks or strokes. For patients younger than age 76, a high-intensity statin is indicated, according to American Heart Association guidelines. Only 17.7% of these patients were discharged on a high-intensity dose, however.

Withholding elective surgery in smokers, obese patients

No one will argue that obesity and tobacco aren’t serious public health issues, underlying many causes of morbidity and mortality. As a result, they’re driving factors behind a fair amount of health care spending.

In England, the county of Hertfordshire recently adopted a new approach to this: a ban on routine, nonurgent surgeries for both. Those with a body mass index (BMI) of 30-40 kg/m² must lose 10% of their weight over 9 months to qualify for a procedure, while those with a BMI above 40 must lose 15%. Smokers have to go 8 weeks without a cigarette and take breath tests to prove it.
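
As a concrete illustration of how those thresholds translate for an individual patient, the sketch below computes BMI and the weight-loss target under the policy as summarized above; the example height and weight are invented.

```python
# Illustrative check of the Hertfordshire criteria as summarized in the text;
# the example height and weight are invented.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def weight_loss_target_kg(weight_kg: float, height_m: float) -> float:
    """Weight loss (kg) required to qualify, per the policy described above."""
    b = bmi(weight_kg, height_m)
    if b > 40:
        return 0.15 * weight_kg   # BMI above 40: lose 15% of body weight
    if b >= 30:
        return 0.10 * weight_kg   # BMI 30-40: lose 10% of body weight
    return 0.0                    # below 30: no weight-loss requirement stated

weight, height = 110.0, 1.70      # invented example: BMI of about 38
print(f"BMI {bmi(weight, height):.1f}; "
      f"required loss {weight_loss_target_kg(weight, height):.1f} kg over 9 months")
```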

The group that formulated the plan noted that resources to help these groups achieve such goals are (and will continue to be) freely available.

Not unexpectedly, the plan is controversial. Robert West, MD, a professor of health psychology and director of tobacco studies at the University College London, told CNN that “rationing treatment on the basis of unhealthy behaviors betrays an extraordinary naivety about what drives those behaviors.”

Of course, this debate is nothing new. In December 2014, I wrote about surgeons in the United States who were refusing to do elective hernia repairs on smokers because of their higher complication rates.

A lot of this is framed in terms of money, since that’s the driving factor. Obese patients and smokers do have higher rates of surgical complications in general, with longer recoveries and, hence, higher costs. This policy tries to put greater responsibility on patients to maintain their own health and well-being. After all, financial resources are a finite, shared commodity.

You can argue this in the other direction, too. Putting off elective procedures (let’s use knee replacements as an example) could increase other expenses: more visits to pain specialists, more tests, a greater risk of falls, and increased use of steroid and cartilage injections, NSAIDs, and narcotics with their respective complications. The financial sword cuts both ways.

Like everything in modern health care, there’s no easy answer. Insurers and doctors try to balance better outcomes vs. greater good and cost savings.

Medicine is, and always will be, an ongoing experiment, where some things work, some don’t, and we learn from time and experience.

Unfortunately, patients and their doctors are often caught in the middle, trapped between market and political forces on one side and caring for those who need us on the other. That’s never good, or easy, for those directly involved with individual patients on the front lines of medical care.
 

Dr. Block has a solo neurology practice in Scottsdale, Ariz.

Is the Estrogen–CGRP Relationship Relevant to Migraine?

Researchers have identified interactions between ovarian steroid hormones, CGRP, and the trigeminovascular system.

Calcitonin gene-related peptide (CGRP) plays a key role in migraine pathophysiology, and recent studies have identified interactions between ovarian steroid hormones, CGRP, and the trigeminovascular system, according to a review published online ahead of print October 30, 2017, in Cephalalgia.

“Numerous animal and human studies have shown that cyclic fluctuations of ovarian hormones (mainly estrogen) modulate CGRP in the peripheral and central trigeminovascular system; this [effect] is especially relevant now that novel antibodies directed against CGRP or its receptor are currently in clinical trials,” said Alejandro Labastida-Ramírez of the Division of Vascular Medicine and Pharmacology at Erasmus University Medical Center in Rotterdam, the Netherlands, and colleagues.

The relationship between estrogen and CGRP seems to be “a key factor involved in the higher prevalence of migraine in women,” the authors said. “Future studies should focus on how fluctuations of gonadal hormones influence migraine pathophysiology in both genders…. Hopefully, these sex-related differences may contribute to the development of gender-specific therapies.”

Interplay of Hormones and CGRP

A clinical study by Stevenson et al in 1986 was one of the first to discover a relationship between female sex hormones and CGRP. In this study, concentrations of immunoreactive plasma CGRP in healthy women were significantly increased throughout pregnancy and decreased after delivery. A 1990 study by Valdemarsson et al found that in healthy subjects, immunoreactive plasma CGRP levels were significantly higher in females than in males. “The use of combined contraceptive pills was associated with even higher levels of immunoreactive CGRP in plasma,” said the review authors. “Accordingly, in postmenopausal women, decreased estradiol serum levels were positively correlated with decreased plasma immunoreactive CGRP concentrations, [suggesting] that the CGRP system could be influenced directly by endogenous or exogenous ovarian steroid hormones.”

Ibrahimi et al in 2017 used an experimental model to explore gender differences in CGRP-dependent dermal blood flow in healthy subjects and migraineurs. Dermal blood flow in males did not vary over time and was comparable between healthy subjects and migraineurs. In healthy women, fluctuations of ovarian steroid hormones influenced CGRP-dependent dermal blood flow. “Interestingly, in female migraine patients, dermal blood flow responses were elevated, compared to healthy subjects, but these responses were independent of the menstrual cycle,” the review authors noted.

Therapeutic Trials

Three humanized monoclonal antibodies targeting CGRP and one fully human monoclonal antibody targeting the CGRP receptor are in development.

While trials indicate that CGRP blockade is effective for treating migraine, further studies are needed to “elucidate whether these novel drugs are safe in individuals with cardiovascular risk factors, if there are any consequences of chronic CGRP inhibition in young reproductive women with a normal menstrual cycle, and whether efficacy depends on the phase of the menstrual cycle,” the authors said.

—Jake Remaly

Suggested Reading

Ibrahimi K, Vermeersch S, Frederiks P, et al. The influence of migraine and female hormones on capsaicin-induced dermal blood flow. Cephalalgia. 2017;37(12):1164-1172.

Labastida-Ramírez A, Rubio-Beltrán E, Villalón CM, MaassenVanDenBrink A. Gender aspects of CGRP in migraine. Cephalalgia. 2017 Oct 30 [Epub ahead of print].

Stevenson JC, Macdonald DW, Warren RC, et al. Increased concentration of circulating calcitonin gene related peptide during normal human pregnancy. Br Med J (Clin Res Ed). 1986;293(6558):1329-1330.

Valdemarsson S, Edvinsson L, Hedner P, Ekman R. Hormonal influence on calcitonin gene-related peptide in man: effects of sex difference and contraceptive pills. Scand J Clin Lab Invest. 1990;50(4):385-388.

Debunking Psoriasis Myths: How to Help Patients Who Are Afraid of Injections

Myth: Patients Are Not Willing to Give Themselves Injections

Injectable biologics target specific parts of the immune system and are backed by ample research on their efficacy, making them popular treatment options for psoriasis patients. Performing a self-injection can be daunting for patients trying a biologic for the first time, and clinicians should be aware of the dearth of patient education material. Although patients may be fearful of self-injections, especially during the first few treatments, their worries can be assuaged with proper instruction and an appropriate delivery method.

Abrouk et al sought to provide an online guide and video on biologic injections to increase the success of the therapy and compliance among patients. They created a printable guide that covers the supplies needed, procedure techniques, and plans for traveling with medications. Because pain is a common concern for patients, they suggest numbing the injection area with an ice pack first. They also offer tips on dealing with injection-site reactions such as redness or bruising.

Nurse practitioners and physician assistants can give psoriasis patients more personalized attention regarding the fear of injections. They can explain the injection procedures and describe differences between administration techniques. Some patients may prefer an autoinjector over a prefilled syringe, which may influence the treatment chosen. Taking photographs to show progress with therapy also may motivate patients to tolerate therapy.

The National Psoriasis Foundation provides the following tips to make it easier for patients to self-inject and reduce the chance of an injection-site reaction:

  • Pick an easy injection site, such as the tops of the thighs, abdomen, or back of the arms.
  • Rotate injection sites from right to left.
  • Numb the area.
  • Warm the pen up by taking it out of the refrigerator 1.5 hours before it is used.
  • Be patient and avoid moving the injection pen before the needle is finished administering the drug.

By giving psoriasis patients educational materials, you can empower them to control their disease with injectable biologics.

Expert Commentary

Most of my patients who use a biologic for the first time are undaunted by learning to inject themselves. I can think of just 1 of my ~300 biologic patients who has to come in every few weeks to have the medicine injected by one of our nurses. Surprisingly, some patients (I'd estimate 5% of my biologic patients) actually prefer the syringe to the autoinjector, commenting that the syringe is less painful and less abrupt. Needle phobia should not be a reason to withhold a biologic from a patient with severe psoriasis who needs it.

—Jashin J. Wu, MD (Los Angeles, California)

References

Abrouk M, Nakamura M, Zhu TH, et al. The patient’s guide to psoriasis treatment. part 3: biologic injectables. Dermatol Ther (Heidelb). 2016;6:325-331.

Aldredge LM, Young MS. Providing guidance for patients with moderate-to-severe psoriasis who are candidates for biologic therapy. J Dermatol Nurses Assoc. 2016;8:14-26.

National Psoriasis Foundation. Self-injections 101. https://www.psoriasis.org/about-psoriasis/treatments/biologics/self-injections-101. Accessed January 2, 2018.

Point/Counterpoint: Should FEVAR be used for a short neck?

FEVAR is generally the best option

Since its advent, endovascular aortic aneurysm repair (EVAR) has steadily become the standard of care in the management of infrarenal abdominal aortic aneurysms (AAAs). In fact, it has now surpassed open surgical repair and is the predominant therapeutic modality for this disease entity. Clearly, there are specific anatomic and technical factors that may preclude the use of traditional EVAR – most notably, challenging proximal neck anatomy, be it insufficient length or severe angulation.

It is estimated that up to 30%-40% of patients are unsuitable candidates because of these concerns.1 Such patients are thus relegated to traditional open repair with the associated concerns for supravisceral clamping, including dramatic changes in hemodynamics and prolonged ICU and hospital stays.

However, with increasing surgeon experience and volume, complex endovascular strategies are being championed and performed, including use of traditional infrarenal devices outside the instructions-for-use indications, “back-table” physician modified devices, chimney/snorkel barreled parallel covered grafts (Ch-EVAR), custom built fenestrated endografts (FEVAR), and use of adjunctive techniques such as endoanchors.

Open surgical repair of pararenal, juxtarenal, and suprarenal AAAs is tried, tested, and durable. Knott and the group from Mayo Clinic reviewed their repair of 126 consecutive elective juxtarenal AAAs requiring suprarenal aortic clamping, noting a 30-day mortality of 0.8%.2 More recent data from Kabbani and the Henry Ford group involved their 27-year clinical experience suggesting that open repair of complex proximal aortic aneurysms can be performed with clinical outcomes that are similar to those of open infrarenal repair.3 I respectfully accept this traditional – and historic – treatment modality.

However, we vascular surgeons are progressive and resilient in our quest to innovate and modernize – some of us even modified the traditional endografts on the back table. We charged forward despite the initial paucity of data comparing infrarenal EVAR with traditional open repair. Now, a large percentage of infrarenal AAA repairs are performed via EVAR. In fact, our steadfast progression to advanced endovascular techniques has raised the concern that our graduating trainees are no longer proficient, competent, or even capable in open complex aneurysm repair!

Tsilimparis and colleagues reported the first outcomes comparing open repair and FEVAR.4 They queried the NSQIP database, comparing 1,091 patients undergoing open repair with 264 undergoing FEVAR. Patients in the open repair group had an increased risk of morbidity across all combined endpoints, including pulmonary and cardiovascular complications, as well as a longer length of stay. A larger comprehensive review pooled the results from 8 FEVAR and 12 open repair series and found the patient groups to be comparable. Open repair, however, was associated with an increased 30-day mortality compared with FEVAR (relative risk 1.03, 2% increased absolute mortality).5

Gupta and colleagues reported the latest multi-institutional data, noting that open repair was associated with higher risk than FEVAR for 30-day mortality, cardiac and pulmonary complications, renal failure requiring dialysis, return to the operating room, and, in this age of cost containment, length of stay (2 days vs. 7 days; P less than .0001).6

A further study by Donas and colleagues allocated 90 consecutive patients with primary degenerative juxtarenal AAAs to different operative strategies based on morphologic and clinical characteristics – 29 FEVAR, 30 Ch-EVAR, and 31 open repair.7 Early procedure-related and all-cause 30-day mortality was 0% in the endovascular group and 6.4% in the open group.

Although open repair for juxtarenal AAAs is the gold standard, short- and mid-term data on the outcomes for complex endovascular repair are excellent. Data on long-term durability, graft fixation/migration as well as the integrity of the graft and concerns for endoleaks and branch vessel patency, however, are limited. We do not have long-term data because we have not been doing these newer procedures for that long – but the data thus far show great promise.

We need to continue to challenge the status quo, particularly when the current data are satisfactory and the procedure feasible. With our continued appraisal of the data we publish as vascular surgeons, we can then identify if these innovations and techniques will withstand the test of time. After all, we are vascular surgeons (particularly those of us who have trained extensively in open repair) – and if open repair is necessary, then we will be ready.

But, if I can avoid a thoracoabdominal incision for a few percutaneous access sites, then sign me up!
 

Dr. Mouawad is chief of vascular and endovascular surgery, medical director of the vascular laboratory, and vice-chair of the department of surgery at McLaren Bay Region, Bay City, Mich. He is assistant professor of surgery at Michigan State University and Central Michigan University.

References

1. Perspect Vasc Surg Endovasc Ther. 2009;21:13-8.

2. J Vasc Surg. 2008;47:695-701.

3. J Vasc Surg. 2014;59:1488-94.

4. Ann Vasc Surg. 2013;27(3):267-73.

5. Eur J Vasc Endovasc Surg. 2009;38(1):35-41.

6. J Vasc Surg. 2017 Dec;66(6):1653-8.

7. J Vasc Surg. 2012 Aug;56(2):285-90.
 

FEVAR may not be the best choice


Over the past 3 decades, EVAR, with its very low periprocedural morbidity and mortality, and satisfactory long-term results, has become the primary treatment modality for the majority of infrarenal AAAs. The success of stent grafts for the repair of AAA relies heavily on satisfactory proximal and distal seal zones. Each commercially available standard EVAR graft has a manufacturer’s instructions for use requiring a proximal landing zone length of between 10 and 15 mm. Patients with less than this required length are considered to have “short necks.” Evaluation of this group of patients has demonstrated that this is not an uncommon finding and that EVAR performed outside the instructions for use has been associated with an increased risk of intraoperative failure, aneurysm expansion, and later complications.1-3

Current treatment options for patients with short necks include open surgical repair (OSR), FEVAR, and EVAR with the chimney graft technique (Ch-EVAR).

Dr. Mitchell Weaver
The Ch-EVAR technique currently lacks any significant long-term follow-up and, with the availability of more proven commercially available devices, is presently a lower-tier endovascular treatment option. There are no head-to-head trials of FEVAR versus OSR for short-neck aneurysms to guide our treatment choice.

Thus, current knowledge acquired from case series, registries, and clinical experience must be used in deciding which therapeutic option to offer patients. Relevant factors influencing this decision include the availability and adaptability of the technique, early outcomes including technical success, morbidity and mortality, and late outcomes including survival, need for reintervention, and other late morbidity. Finally, in an era of value-based medical care, cost also must be considered.

Currently, there is only one Food and Drug Administration–approved fenestrated graft. When used in properly selected patients, it has yielded excellent periprocedural results approaching those of standard EVAR. However, there are limitations in both the availability and adaptability of FEVAR. These grafts are custom made for each patient, often requiring several weeks of lead time. Adaptability also has its limits, including access vessels, severe neck angulation, calcification, mural thrombus, and branch vessel size, number, location, and associated arterial disease. Any of these factors may preclude the use of this technology. Open repair, on the other hand, is not limited by graft availability and allows for custom modification to each patient’s specific disease morphology. The essential limitation of open repair is the patient’s physiological ability to withstand the operation.

Several studies comparing the early outcomes of FEVAR with those of comparable patients undergoing OSR of similar aneurysms have reported significantly lower 30-day mortality and major morbidity rates for FEVAR.4,5 However, Rao et al., in a recent systematic review and meta-analysis that included data on 2,326 patients from 35 case series reporting on elective repair of juxtarenal aneurysms by either OSR or FEVAR, found perioperative mortality not to be significantly different (4.1% for both). Also, no significant difference was found between the two groups in postoperative renal insufficiency or the need for permanent dialysis. However, OSR did have a significantly higher major complication rate (25% vs. 15.7%).6 These findings suggest that both modalities can be performed successfully, but that long-term outcomes need to be considered to determine whether the increased initial morbidity of OSR is justified by differences in long-term results between the two modalities.

Open surgical repair of juxtarenal AAA has been shown to be a durable repair.7 While early and even intermediate results of FEVAR appear to be satisfactory, long-term durability has yet to be determined.4,8 Along with continued exclusion of the aneurysm sac, as with standard EVAR, FEVAR carries the additional concern of the fate of the organs supplied by the fenestrated/stent-grafted branches. Longer-term follow-up in the same review by Rao et al. showed that significantly more FEVAR patients developed renal failure compared with OSR patients (19.7% vs. 7.7%). FEVAR patients also had a higher rate of reintervention.

Finally, long-term survival was significantly greater in OSR patients than in FEVAR patients at both 3 years (80% vs. 74%) and 5 years (73% vs. 55%). These authors concluded that open repair remains the gold standard, while FEVAR is a favorable option for high-risk patients.6

These new and innovative stent graft devices come at considerable expense. While this aspect of FEVAR has not been extensively studied, Michel et al., in their report from the multicenter prospective Windows registry, attempted to evaluate the economic aspect of FEVAR. They compared a group of patients who underwent FEVAR with patients from a large national hospital discharge database who underwent OSR. No difference in 30-day mortality was noted between the two groups; however, cost was significantly greater with FEVAR. The authors concluded that FEVAR did not appear to be justified for patients with juxtarenal AAA who are fit for open surgery.9

For now, the roles of OSR and FEVAR would appear to be complementary. Current evidence suggests that OSR is the most appropriate intervention for good-risk patients with an anticipated longer life expectancy. Patients who have appropriate anatomy for FEVAR and are at higher risk for open repair would benefit from FEVAR. As further experience and outcomes are accumulated, our ability to select the appropriate therapy for individual patients should improve.

Dr. Weaver is an assistant clinical professor for surgery at Wayne State School of Medicine, Detroit, and an attending in the division of vascular surgery, Henry Ford Hospital.

References

1. Ir J Med Sci. 2015;184(1):249-55.

2. Circulation. 2011;123(24):2848-55.

3. J Endovasc Therapy. 2001;8(5):457-64.

4. Eur J Vasc Endovasc Surg. 2009;38(1):35-41.

5. Ann Vasc Surg. 2013;27(3):267-73.

6. J Vasc Surg. 2015;61(1):242-55.

7. J Vasc Surg. 2012;56(1):2-7.

8. J Cardiovasc Surg. 2015;56(3):331-7.

9. Eur J Vasc Endovasc Surg. 2015;50(2):189-96.

MRI-guided neurofeedback improves ADHD long term in adolescent boys

Article Type
Changed
Fri, 01/18/2019 - 17:18

 

Neurofeedback based upon real-time functional magnetic resonance imaging resulted in long-term reduction in attention-deficit/hyperactivity disorder symptoms in adolescents in a randomized controlled proof-of-concept study, Katya Rubia, PhD, reported at the annual congress of the European College of Neuropsychopharmacology.

The effect size of the improvement when measured at follow-up 11 months after completing the functional MRI-based neurofeedback (fMRI-NF) training exercises was moderate to large and comparable to that of psychostimulant medication in published placebo-controlled clinical trials. But the effects of the medications last only 24 hours after administration, and the drugs have side effects.

Katya Rubia, PhD
Thus, fMRI-NF offers several major advantages over drug therapy: “Learning brain self-regulation enhances neuroplasticity, and the effects are likely to be longer lasting than with external drug stimulation. Neurofeedback seems to have no side effects, and is preferred by parents and patients. And the long-term effects of stimulant medication on the developing brain are unknown,” said Dr. Rubia, professor of cognitive neuroscience and head of the section of developmental neurobiology and neuroimaging at the Institute of Psychiatry at King’s College London.

Neurofeedback is an operant conditioning procedure, which, through trial and error, teaches patients to self-regulate specific areas of the brain involved in psychopathology. EEG-based neurofeedback for ADHD has been extensively studied, with generally small to medium effect sizes reported. Moreover, patients need to be highly motivated to succeed at EEG-NF: It takes 30-40 hour-long EEG-NF sessions to learn targeted brain self-control in ADHD, whereas in Dr. Rubia’s study, patients learned to self-regulate brain activity in an average of eight fMRI sessions, each lasting 8.5 minutes, over the course of 2 weeks. The far speedier learning curve is probably tied to the superior specificity of spatial localization afforded by fMRI neurofeedback, according to the neuroscientist.

Also, fMRI-NF can reach certain key regions of the brain involved in ADHD that EEG-NF cannot, most notably the inferior frontal cortex (IFC) and basal ganglia, she added.

The target region in the proof-of-concept study was the right IFC, an area important for cognitive control, attention, and timing. Functional neuroimaging studies consistently have shown that the right IFC is underactive in ADHD and that psychostimulant medications upregulate this area. A dysfunctional right IFC is an ADHD-specific abnormality not present in children with obsessive-compulsive disorder (JAMA Psychiatry. 2016 Aug 1;73[8]:815-25), conduct disorder, or autism.

“The IFC seems to be a very good functional biomarker for ADHD,” Dr. Rubia said.

The proof-of-concept study, published in Human Brain Mapping, included 31 boys with a DSM-5 diagnosis of ADHD, aged 12-17, who were randomized to fMRI-NF of the right IFC or, as a control condition, to fMRI-NF targeting the left parahippocampal gyrus. Two patients had the inattentive subtype of ADHD; the rest had the combined hyperactive/inattentive form. Parents and patients were blinded as to their study arm.

The fMRI-NF training teaches subjects to self-regulate the blood oxygen level–dependent (BOLD) response of target areas of the brain; in effect, it is neuroimaging employed as neurotherapy. To make the training experience more attractive to young patients, it was presented as a computer game: By making progress in controlling their brain activity, patients could launch a rocket ship on the screen. With further progress, they could send the rocket through the atmosphere into space and eventually land it on another planet.
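To make the mechanics concrete, the following is a minimal, hypothetical Python sketch of the kind of per-volume feedback computation such training relies on. It is not the study’s actual software; the scaling constant and the acquisition and display calls are illustrative assumptions only.

import numpy as np

def roi_percent_signal_change(volume, roi_mask, baseline_mean):
    # Percent BOLD signal change in the target region (e.g., right IFC)
    # for one fMRI volume, relative to a baseline estimated during rest blocks.
    roi_signal = volume[roi_mask].mean()
    return 100.0 * (roi_signal - baseline_mean) / baseline_mean

def feedback_level(pct_change, max_change=2.0):
    # Map percent signal change to a 0-1 "rocket height" for the game display.
    # max_change is an arbitrary scaling assumption, not a study parameter.
    return float(np.clip(pct_change / max_change, 0.0, 1.0))

# Schematic per-volume loop (acquisition and display functions are hypothetical):
# for volume in realtime_volume_stream():
#     pct = roi_percent_signal_change(volume, roi_mask, baseline_mean)
#     update_rocket_display(feedback_level(pct))

The essential point of such a loop is that the feedback the patient sees is driven only by activity in the chosen region, which is what gives fMRI-NF its spatial specificity relative to EEG-NF.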

The primary study endpoint was change in the ADHD Rating Scale. The group that targeted self-upregulation of right IFC activity showed roughly a 20% improvement in scores, from a baseline mean total score of 36.7 to 30.2 immediately post treatment, further improving to a score of 26.7 at roughly 11 months of follow-up. Mean scores on the inattention subscale improved from 19.8 to 15.9 immediately post treatment and 15.3 at follow-up. Scores on the hyperactivity/impulsivity subscale went from 16.9 before treatment to 14.2 after treatment and 11.5 at follow-up.
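As a rough check on the arithmetic behind these totals (a sketch only, not taken from the study report), the relative improvement from the baseline mean can be computed directly:

baseline, post, follow_up = 36.7, 30.2, 26.7

def percent_improvement(before, after):
    # Relative reduction in ADHD Rating Scale total score.
    return 100.0 * (before - after) / before

print(round(percent_improvement(baseline, post), 1))       # 17.7, immediately post treatment
print(round(percent_improvement(baseline, follow_up), 1))   # 27.2, at roughly 11 months of follow-up

These figures fit the pattern described above: an improvement of roughly one-fifth immediately after training, with further gains by follow-up.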

There were no side effects of fMRI-NF in either study arm.

However, a degree of uncertainty exists regarding the clinical significance of the results, Dr. Rubia said. That’s because the control group showed a similar degree of improvement in ADHD symptoms immediately after learning to upregulate the left parahippocampal gyrus, although their scores did backslide modestly during 11 months of follow-up, while the IFC group continued to improve.

Dr. Rubia acknowledged that this raises the possibility that the observed improvement in clinical symptoms achieved through fMRI-NF could be attributable to a placebo effect. However, she said she believes this is unlikely for several reasons. For one, brain scans showed that targeting either the right IFC or the left parahippocampal gyrus not only resulted in upregulation of activity in those specific regions, but throughout the broader neural networks of which they are a part. The right IFC upregulators showed activation of a bilateral dorsolateral prefrontal cortex/IFC-insular-striato-cerebellar cognitive control network. In contrast, the boys who targeted the left parahippocampal gyrus experienced activation of associated posterior visual-spatial attention regions, which are relevant to ADHD. This made for a far from ideal control group.

Also, the amount of improvement in ADHD symptoms in the right IFC-targeted group correlated with the degree of activation of that region, indicative of a brain-behavior correlation that speaks against a nonspecific effect.

Because this was a small, unpowered pilot study and interest remains intense in potential nonpharmacologic treatments for ADHD, the U.K. Medical Research Council is funding Dr. Rubia and her colleagues for a new 100-patient study – including a sham fMRI-NF arm – in order to definitively address the possibility of a placebo effect. The study also will attempt to pin down the patient population most likely to benefit from fMRI-NF. “It’s possible that the inattentive subtype of ADHD will respond best. Neurofeedback is, after all, a form of attention training,” she noted.

While real-time fMRI-NF might sound prohibitively expensive for widespread use in clinical practice for a disorder as common as ADHD, which has an estimated prevalence of about 7%, it might actually stack up reasonably well in a cost-benefit analysis, compared with ongoing medication costs and side effects or with a year’s worth of weekly psychotherapy, according to Dr. Rubia.

In parallel with the ongoing sham-controlled fMRI-NF study, Dr. Rubia also is conducting a clinical trial of transcranial direct current stimulation of the right IFC in combination with cognitive training. The idea is to study the clinical impact of directly upregulating activity in this area of the brain, bypassing the added step of training patients to gain self-control over this dysregulated region. The early findings, she said, look promising.

The fMRI-NF study (Hum Brain Mapp. 2017 Jun;38[6]:3190-209) was sponsored by the U.K. National Institute for Health Research and the Maudsley NHS Foundation Trust. Dr. Rubia reported receiving speakers honoraria from Lilly, Shire, and Medice.

Source: Rubia K et al. European College of Neuropsychopharmacology.

Vitals

 

Key clinical point: Neuroimaging can be employed as neurotherapy to improve ADHD nonpharmacologically.

Major finding: Adolescents with ADHD who learned via functional MRI neurofeedback to upregulate activity in their right inferior frontal cortex showed significant improvement in scores on the ADHD Rating Scale, from a baseline mean total score of 36.7 to 30.2 immediately after the training program, further improving to 26.7 at roughly 11 months of follow-up.

Study details: A prospective, randomized, single-blind study of 31 boys aged 12-17 with ADHD.

Disclosures: The study was sponsored by the U.K. National Institute for Health Research and the Maudsley NHS Foundation Trust. The presenter reported receiving speakers honoraria from Lilly, Shire, and Medice.

Source: Rubia K et al. European College of Neuropsychopharmacology.


New tool predicts late distant recurrence of postmenopausal ER+ breast cancer

Article Type
Changed
Wed, 01/04/2023 - 16:45

 

A new prognostic tool that uses four clinical and pathological variables may help to guide decisions about extending adjuvant endocrine therapy for postmenopausal women with estrogen receptor–positive (ER+) breast cancer, according to a study reported at the San Antonio Breast Cancer Symposium.

ER+ breast cancer is well known for recurring long after endocrine therapy stops, but the risk varies widely, ranging from 10% to 40% (N Engl J Med. 2017;377:1836-46), noted lead investigator Ivana Sestak, MS, PhD, a lecturer in medical statistics at the Queen Mary University of London. “A few trials have shown that extended endocrine therapy can reduce the risk of recurrence, but careful assessment of potential side effects and actual risk of developing a late distant recurrence is essential,” she said.

Dr. Ivana Sestak
The investigators developed and validated the new tool – called the Clinical Treatment Score post-5 years (CTS5) – among 11,446 postmenopausal women treated for ER+ breast cancer (with or without chemotherapy) who had completed 5 years of adjuvant endocrine therapy without any distant recurrence on the randomized ATAC and BIG 1-98 trials.

The investigators used the CTS5 to stratify patients into a low-risk group (risk of late distant recurrence less than 5%), an intermediate-risk group (risk between 5% and 10%), and a high-risk group (risk more than 10%). The observed rates of distant recurrence between years 5 and 10 were about 3% for the low-risk group, 7% for the intermediate-risk group, and 19% for the high-risk group. In addition, the CTS5 outperformed the original Clinical Treatment Score (CTS0), which was developed to predict recurrence between 0 and 10 years (J Clin Oncol. 2011;29:4273-8).

“We have developed a simple prognostic tool for the prediction of late distant recurrences which will help clinicians and their patients in the decision-making process about extended endocrine therapy,” Dr. Sestak commented. “The CTS5 was highly prognostic for the prediction of late distant recurrences and identified a large proportion of women, 42%, as low risk, where the value of extended endocrine therapy is limited. The CTS5 was also more prognostic than the already published CTS0 and should be used in this context for the prediction of late distant recurrence.”

“We aim to make the CTS5 algorithm and risk curve, with a read-out table, available to clinicians, and it will also be published in our manuscript,” she added.

Session attendee Frankie Ann Holmes, MD, of the Texas Oncology/US Oncology Network in Houston commented, “Just identifying high risk doesn’t necessarily translate into benefit, which is what we see with the Breast Cancer Index: You get the high risk, but then you learn if there is actually benefit to the extended therapy. Does your assay have a benefit portion to it?”

“No, we can’t look at the predictive benefit [with the CTS5]. This assay is purely a prognostic tool to predict late distant recurrences,” Dr. Sestak replied. “In these two trials, we do not have information on how many patients actually went on to extended endocrine therapy. You have to remember, these are old trials – they finished in about 2007-2008 – so not many women would have been given extended endocrine therapy at that time point.”

Session attendee Laura J. van’t Veer, PhD, of the University of California, San Francisco, asked, “How do you feel this will translate for risk up to 20 years, for which the question of extended endocrine therapy might also be very relevant?”

“For the purpose of this analysis, we only looked at out to 10 years. But I agree, it’s also important if we could apply a prognostic tool out to 20 years,” Dr. Sestak replied. “We have longer follow-up on some of the ATAC women, and we might look into that to see if we see any benefit of using a prognostic tool in the prediction of late distant recurrences.”
 

Study details

The investigators developed and trained the new tool using data from 4,735 women from the ATAC trial. They then validated the tool using data from 6,711 women from the BIG 1-98 trial.

The final CTS5 model contained four clinical variables, Dr. Sestak reported: number of involved nodes, size of the tumor, grade of the tumor, and age of the patient.
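
To illustrate how a four-variable score of this kind maps onto the risk groups described above, the sketch below (in Python) implements a generic prognostic score. The coefficients, intercept, and logistic form are placeholders invented for illustration and are not the published CTS5 algorithm; only the four input variables and the risk-group cutoffs (less than 5%, 5%-10%, more than 10%) come from the study as reported here.

```python
import math
from dataclasses import dataclass

@dataclass
class Patient:
    positive_nodes: int    # number of involved lymph nodes
    tumor_size_mm: float   # tumor size in millimeters
    grade: int             # tumor grade, 1-3
    age_years: float       # patient age at diagnosis

# Hypothetical weights and intercept for illustration only -- NOT the published CTS5 coefficients.
WEIGHTS = {"nodes": 0.35, "size": 0.03, "grade": 0.40, "age": 0.02}
BASELINE = -4.0

def late_recurrence_risk(p: Patient) -> float:
    """Return an illustrative 5-10 year distant-recurrence risk on a 0-1 scale."""
    linear_score = (BASELINE
                    + WEIGHTS["nodes"] * p.positive_nodes
                    + WEIGHTS["size"] * p.tumor_size_mm
                    + WEIGHTS["grade"] * p.grade
                    + WEIGHTS["age"] * p.age_years)
    return 1.0 / (1.0 + math.exp(-linear_score))  # logistic link, assumed for the sketch

def risk_group(risk: float) -> str:
    """Apply the cutoffs reported for CTS5: <5% low, 5%-10% intermediate, >10% high."""
    if risk < 0.05:
        return "low"
    if risk <= 0.10:
        return "intermediate"
    return "high"

if __name__ == "__main__":
    example = Patient(positive_nodes=2, tumor_size_mm=22, grade=2, age_years=63)
    r = late_recurrence_risk(example)
    print(f"illustrative risk = {r:.1%}, group = {risk_group(r)}")
```

A clinical implementation would substitute the published CTS5 coefficients and check its output against the read-out table the investigators plan to release.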

In the ATAC population, the CTS5 model did a better job than the original CTS0 model of predicting late distant recurrence. CTS5 improved the prediction of late distant recurrence by a factor of 2.47, whereas CTS0 improved the predictive value by a factor of 2.04. The CTS5 model performed similarly well regardless of whether patients had received chemotherapy.

In the BIG 1-98 population, the findings were much the same: The CTS5 model improved prediction of late distant recurrence by 2.07, while the CTS0 model improved prediction of late distant recurrence by 1.84. Performance of the CTS5 model was again similarly good regardless of whether patients had received chemotherapy.

Observed rates of distant recurrence between years 5 and 10 were similar in the ATAC and BIG 1-98 populations for the CTS5-defined low-risk group (2.5% and 3.0%, respectively), intermediate-risk group (7.7% and 6.9%), and the high-risk group (20.3% and 17.3%).

When the two trials’ populations were combined, the observed rate was 3.0% in the CTS5-defined low-risk group, 7.3% in the intermediate-risk group, and 18.9% in the high-risk group.

In addition, the main results held up among all node-negative women combined and among all women who had between one and three positive nodes combined. “For women with four or more positive lymph nodes, the CTS5 was not informative and categorized virtually all women into the high-risk group,” Dr. Sestak noted.

The investigators did not look at whether local or regional recurrences modulated the risk of late distant recurrence, she said. However, women who had experienced isolated local recurrence during the first 5 years would have been included in the analysis.

“A strength of our study is that we used clinicopathological parameters that are measured in all breast cancer patients, and there is no need for further testing,” noted Dr. Sestak, who disclosed that she has received fees for advisory boards and lectures from Myriad Genetics.

On the other hand, it is unclear how the CTS5 would perform among premenopausal women and among women with HER2-positive disease, given that the two trials took place before routine HER2 testing and HER2-directed therapy were used.

SOURCE: Sestak I et al. SABCS 2017 Abstract GS6-01.


Article Source: REPORTING FROM SABCS 2017

Vitals

 

Key clinical point: The CTS5 prognostic tool uses clinical and pathological data to predict late distant recurrence of postmenopausal ER+ breast cancer.

Major finding: The tool stratified patients for risk of distant recurrence between years 5 and 10 as low risk (less than 5% risk), intermediate risk (5%-10% risk), and high risk (more than 10% risk).

Data source: A cohort study of 11,446 postmenopausal women with early-stage breast cancer who were free of distant recurrence after 5 years of adjuvant endocrine therapy.

Disclosures: Dr. Sestak disclosed that she has received fees for advisory boards and lectures from Myriad Genetics.

Source: Sestak I et al. SABCS 2017 Abstract GS6-01.


FDA expands indication for bosutinib in newly diagnosed CML

Article Type
Changed
Fri, 01/04/2019 - 10:15

 

Bosutinib is now approved for the treatment of adults with newly diagnosed chronic phase Philadelphia chromosome–positive (Ph+) chronic myelogenous leukemia (CML).

The Food and Drug Administration granted accelerated approval for bosutinib (Bosulif), which is marketed by Pfizer. The approval is based on data from the randomized, multicenter phase 3 BFORE trial of 487 patients with newly diagnosed chronic phase Ph+ CML who received either bosutinib or imatinib 400 mg once daily. The major molecular response rate at 12 months was 47.2% (95% confidence interval, 40.9-53.4) in the bosutinib arm and 36.9% (95% CI, 30.8-43.0) in the imatinib arm (two-sided P = .0200).
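
For readers interested in how confidence intervals like these arise, the Python sketch below computes normal-approximation (Wald) 95% CIs for each arm's major molecular response rate. The per-arm sample sizes are an assumption, taken as a roughly even split of the 487 randomized patients, and the Wald method is only one way such intervals can be calculated; the trial's own statistical analysis may have used a different approach.

```python
import math
from typing import Tuple

def wald_ci(p: float, n: int, z: float = 1.96) -> Tuple[float, float]:
    """Normal-approximation 95% confidence interval for a binomial proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Assumed arm sizes: the 487 patients split roughly evenly (exact split not stated here).
arms = {"bosutinib": (0.472, 244), "imatinib": (0.369, 243)}

for name, (mmr_rate, n) in arms.items():
    lo, hi = wald_ci(mmr_rate, n)
    print(f"{name}: MMR {mmr_rate:.1%} (approx. 95% CI {lo:.1%}-{hi:.1%})")

# Absolute difference in MMR rates between the arms (point estimate only).
diff = arms["bosutinib"][0] - arms["imatinib"][0]
print(f"absolute difference: {diff:.1%}")
```

Under these assumptions the computed intervals land close to the reported 40.9-53.4 and 30.8-43.0.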

Continued approval for this indication may depend on confirmation of clinical benefit in an ongoing follow-up trial, according to Pfizer.

Bosutinib, a kinase inhibitor, was first approved in September 2012 for the treatment of adult patients with chronic, accelerated, or blast phase Ph+ CML with resistance or intolerance to prior therapy.

The recommended dose of bosutinib for newly diagnosed chronic phase Ph+ CML is 400 mg orally once daily with food.

The most common adverse reactions to the drug in newly diagnosed CML patients are diarrhea, nausea, thrombocytopenia, rash, increased alanine aminotransferase, abdominal pain, and increased aspartate aminotransferase.


CMS clinical trials raise cardiac mortality

Article Type
Changed
Fri, 01/18/2019 - 17:18

 

Nearly 2 years ago I speculated in this column that health planners or health economists would attempt to manipulate the patterns of patient care to influence the cost and/or quality of clinical care. At that time I suggested that any such intervention should be managed as we manage drug and device trials: to ensure authenticity and accuracy and, above all, patient safety, the study design should be built into the intervention, equipoise should be present in the arms of the trial, and a safety monitoring board should be in place to alert investigators when and if patient safety is threatened. Patient consent should also be obtained.

Dr. Sidney Goldstein
Little did I know that an example was in play at the time of publication. A study presented at the Heart Failure Society of America meeting indicates that the Centers for Medicare & Medicaid Services, as part of the Affordable Care Act, was carrying out such an experiment in an attempt to lower costs and improve the quality of care for heart failure patients by decreasing the occurrence of readmissions. On the surface, that appears to be a laudable goal and one that we can all support. In an attempt to decrease readmissions, CMS had incentivized the process by financially rewarding hospitals if they decreased repeat admissions after discharge. Much to the surprise of the health planners, the study reported that, although 30-day readmissions decreased as a result of the financial incentives, 30-day mortality increased. This was particularly surprising because in numerous drug trials, notably MERIT-HF (Lancet. 1999 Jun 12;353:2001-7), readmission usually tracked closely with mortality.

Beginning in 2012, CMS, using claims data from 2008 to 2012, penalized hospitals if they did not achieve acceptable readmission rates. At the same time, the agency established the Hospital Readmissions Reduction Program to monitor 30-day mortality and standardize readmission data. The recent data indicate that the incentives did achieve some decrease in rehospitalization, but this was associated with a 16.5% relative increase in 30-day mortality. It was of particular concern that in the previous decade there had been a progressive decrease in 30-day mortality (Circulation. 2014;130:966-75). The increase in 30-day mortality observed during the 4-year observation period appears to have interrupted that progressive decline, which would have decreased to 30% had the plan not intervened.

The kind of social experimentation and manipulation of health care that I had previously raised concerns about was carried out and, as far as I can tell, continues without any oversight and with little insight into the possible risks of the process. A better-designed study would have provided a better understanding of these results and might have mitigated the adverse effects and mortality events. It has been suggested that some hospitals actually gamed the system to their economic advantage. In addition, no oversight board was or is in place, as we have with drug trials, to allow monitors to become aware of adverse events before further loss of life occurs.

I would agree that a randomized trial in this environment would be difficult to achieve. Obtaining consent from thousands of patients would also be difficult. Nevertheless, health care planners should not have free rein to modify accepted processes without taking into consideration the potential risks of their intervention.
 

Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.


EndoPredict results reflected tumor response to neoadjuvant therapy

Article Type
Changed
Wed, 01/04/2023 - 16:45

 

– The results of the EndoPredict test appear to predict tumor response in patients with early hormone receptor–positive, HER2-negative breast cancer given neoadjuvant therapy, based on results of a study conducted by the Austrian Breast & Colorectal Cancer Study Group (ABCSG).

“Very good tumor shrinkage in estrogen receptor–positive, HER2-negative disease is going to happen only in a minority of patients, and biomarkers that would predict excellent tumor shrinkage are an unmet medical need,” commented lead investigator Peter Dubsky, MD, PhD, who is head of the Breast Center at Hirslanden Klinik St. Anna, Lucerne, Switzerland. “As a surgeon, that would help me to predict breast conservation at diagnosis, but as a surgical oncologist, I would also recognize that tumor response is an important component of future survival.”

SABCS/Scott Morgan 2017
Dr. Peter Dubsky


The ABCSG findings suggest expanded utility for EndoPredict. The test’s molecular score is currently used along with tumor size and nodal status to predict the 10-year distant recurrence rate, and whether patients may safely forgo chemotherapy or are at high risk and may need adjuvant chemotherapy in addition to endocrine therapy.

Dr. Dubsky and his coinvestigators assessed performance of the EndoPredict test among 217 patients treated on ABCSG 34, a randomized phase 2 neoadjuvant trial. Findings showed that among patients given neoadjuvant endocrine therapy because they had less aggressive disease features, an EndoPredict high-risk result was associated with poor response (negative predictive value of 92%), defined as a residual cancer burden (RCB) of II or III, he reported at the San Antonio Breast Cancer Symposium.

On the other hand, among patients given neoadjuvant chemotherapy because they had more aggressive disease features, a low-risk result was associated with poor response (negative predictive value of 100%).

“Clinicians really gave us two distinct cohorts within ABCSG 34. In the luminal A–type patients who were treated with neoendocrine therapy, a high EndoPredict score predicted a low chance of tumor shrinkage. In the more aggressive ER-positive tumors, so-called luminal B type, treated with neoadjuvant chemotherapy, there was absolutely no excellent response in the low-risk group,” Dr. Dubsky summarized. “We believe that this molecular score may contribute to patient selection for biomarker-driven studies, especially in the neoadjuvant setting.”

Session attendee Steven Vogl, MD, a medical oncologist with the Montefiore Medical Center in New York, commented, “I have trouble correlating an RCB of 0 or I with what you as a surgeon do for the patient, because you are talking about pathologic complete response or just a few cells there. That’s not what determines how much breast you take off: It’s determined by the total size of the tumor and the size of the breast. So if it’s less than a few centimeters, I’m sure you can do a lumpectomy in every patient. Tell me why I should care that you are getting an RCB of 0 or I in these endocrine patients.”

“Because it’s more likely that these patients will have a smaller tumor and better tumor shrinkage,” Dr. Dubsky replied. “You are of course right, RCB 0 or I was not designed to help surgeons. But it helps me as a translational scientist to have a surrogate and an exact classification for good tumor shrinkage. That’s how I used it.”

C. Kent Osborne, MD, codirector of SABCS and director of the Dan L. Duncan Cancer Center at Baylor College of Medicine in Houston, asked, “We see it in the clinic, and I’m sure you have as well, patients whose tumor doesn’t shrink very much, but the Ki-67 really drops. And that may or may not be a better factor than the actual tumor shrinkage. So how many patients who had tumors that didn’t shrink, which was your endpoint, had a reduction in Ki-67 that was, say, 5%?”

“We haven’t looked at that specifically, but we will do so as we carry on with the follow-up of these patients. Then we can learn more about the prognosis,” Dr. Dubsky replied.
 

Study details

ABCSG 34 was a randomized phase 2 trial testing addition of the cancer vaccine tecemotide (Stimuvax) to neoadjuvant standard of care among patients with HER2-negative early breast cancer.

Dr. Dubsky and coinvestigators restricted analyses to patients with hormone receptor–positive disease who, depending on clinical and pathologic factors, received neoadjuvant chemotherapy (eight cycles of epirubicin-cyclophosphamide and docetaxel) or neoadjuvant endocrine therapy (6 months of letrozole [Femara]) as standard of care. They were then randomized to additionally receive tecemotide or not before undergoing surgery.

Overall, 25% of the 134 patients in the neoadjuvant chemotherapy group had a good tumor response, defined as pathologic complete response in both breast and nodes (RCB of 0) or minimal residual disease (RCB of I).

Higher EndoPredict score was associated with greater likelihood of good response to chemotherapy. EndoPredict risk group (high vs. low) had a negative predictive value of 100%, a positive predictive value of 26.4%, a true-positive rate of 100%, and a true-negative rate of 8.9% for predicting response (P = .112).

Area under the receiver operating characteristic curve was 0.736.

In a multivariate model, EndoPredict score as a continuous variable was not an independent predictor of response. “The good response was largely driven by covariates that included cell proliferation, and it was Ki-67 that was significant,” Dr. Dubsky noted.

Overall, 18% of the 83 patients in the neoadjuvant endocrine therapy group had a good tumor response (RCB of 0 or I). Here, lower EndoPredict score was associated with greater likelihood of good response. EndoPredict risk group (high vs. low) had a negative predictive value of 92.3%, a positive predictive value of 27.3%, a true-positive rate of 80.0%, and a true-negative rate of 52.9% for predicting response (P = .024). Area under the curve was 0.726.
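
The predictive values quoted here and for the chemotherapy cohort follow directly from a 2-by-2 table of test result (high vs. low risk) against response (good vs. poor). The Python sketch below shows that arithmetic using cell counts reconstructed to be consistent with the reported figures for the endocrine-therapy cohort (83 patients, about 15 good responders); the trial's actual cell counts are not given in this report and may differ slightly.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """PPV, NPV, sensitivity (true-positive rate), and specificity (true-negative rate)
    from a 2x2 table in which a 'positive' test is the risk category expected to respond."""
    return {
        "PPV": tp / (tp + fp),          # good responders among test-positive patients
        "NPV": tn / (tn + fn),          # poor responders among test-negative patients
        "sensitivity": tp / (tp + fn),  # test-positive among good responders
        "specificity": tn / (tn + fp),  # test-negative among poor responders
    }

# Counts reconstructed to match the reported endocrine-therapy cohort metrics (assumption).
metrics = diagnostic_metrics(tp=12, fp=32, fn=3, tn=36)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

With these counts the function returns approximately 27.3% PPV, 92.3% NPV, 80.0% sensitivity, and 52.9% specificity, matching the figures reported above.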

In a multivariate model here, EndoPredict score as a continuous variable, its estrogen receptor–signaling/differentiation component, and Ki-67 did not independently predict response. “It was maybe a bit surprising that T stage was the strongest factor, possibly indicating that we should have simply treated those women longer than 6 months,” Dr. Dubsky commented. The EndoPredict proliferation component was also a significant predictor.

“Possibly, the very narrow distribution of Ki-67 [among patients given neoendocrine therapy] may have prevented this factor from playing a bigger role in this particular model,” he speculated.

Dr. Dubsky disclosed that he receives consulting fees from Myriad, the maker of EndoPredict, and from Cepheid, Nanostring, and Amgen.

 

 

SOURCE: Dubsky P et al. SABCS 2017 Abstract GS6-04.


Article Source: REPORTING FROM SABCS 2017

Vitals

 

Key clinical point: The results of the EndoPredict test appear to predict tumor response in patients with early hormone receptor–positive, HER2-negative breast cancer given neoadjuvant therapy.

Major finding: EndoPredict predicted poor tumor shrinkage in patients given neoadjuvant endocrine therapy (high-risk test result NPV, 92%) or neoadjuvant chemotherapy (low-risk test result NPV, 100%).

Data source: A cohort study of 217 patients with HR–positive, HER2-negative breast cancer enrolled in a phase 2 trial of neoadjuvant therapy (ABCSG 34).

Disclosures: Dr. Dubsky disclosed that he receives consulting fees from Cepheid, Myriad, Nanostring, and Amgen.
