TAVR, SAVR share same infective endocarditis risk

The risk of infective endocarditis following transcatheter aortic valve replacement (TAVR) for the treatment of severe aortic stenosis proved to be the same as after surgical replacement in a French national propensity score–matched study.


This finding from what is believed to be the largest-ever study of infective endocarditis following TAVR will come as a surprise to many physicians. It’s easy to mistakenly assume the risk of this feared complication is lower – and perhaps even negligible – in TAVR patients, since the procedure doesn’t involve a significant surgical wound, is briefer, entails a shorter hospital stay, and requires markedly less recovery time than surgical aortic valve replacement (SAVR).

Not so, Laurent Fauchier, MD, PhD, said in presenting the study findings at the annual congress of the European Society of Cardiology.

“Do not think there is a lower risk of infective endocarditis. Be aware, be careful, and provide appropriate antibiotic prophylaxis, just as surgeons do in SAVR. Don’t think, as I did, that with TAVR with no pacemaker implantation there is no risk of infective endocarditis. The TAVR valve is a device, it’s a prosthesis, and the risk is very similar to that of surgery,” advised Dr. Fauchier, a cardiologist at Francois Rabelais University in Tours, France.



He presented a study of all of the nearly 108,000 patients who underwent isolated TAVR or SAVR in France during 2010-2018. The data source was the French national administrative hospital discharge record system. Since the TAVR patients were overall markedly older and sicker than the SAVR patients, especially during the first years of the study, he and his coinvestigators performed propensity score matching using 30 variables, which allowed them to narrow the analysis to 16,291 TAVR patients and an equal number of closely matched SAVR patients.
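The report does not include the investigators’ analysis code. For readers unfamiliar with the technique, the following is a minimal sketch of 1:1 propensity score matching on a toy dataset; the covariate names, the logistic model, and the greedy caliper matching shown here are illustrative assumptions, not the actual 30-variable French analysis.

```python
# Minimal sketch of 1:1 propensity score matching (illustrative only;
# covariates and matching details are hypothetical, not the study's own).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy cohort: 'tavr' = True for TAVR, False for SAVR. Two stand-in covariates
# take the place of the 30 matching variables described in the study.
n = 5000
df = pd.DataFrame({
    "age": rng.normal(80, 8, n),
    "comorbidity_index": rng.poisson(2, n),
})
true_logit = -22 + 0.25 * df["age"] + 0.3 * df["comorbidity_index"]
df["tavr"] = rng.random(n) < (1.0 / (1.0 + np.exp(-true_logit))).to_numpy()

# Step 1: estimate each patient's propensity to receive TAVR.
X = df[["age", "comorbidity_index"]]
ps_model = LogisticRegression(max_iter=1000).fit(X, df["tavr"])
df["ps_logit"] = ps_model.decision_function(X)   # logit of the propensity score

# Step 2: greedy 1:1 nearest-neighbor matching on the logit, without
# replacement, within a caliper of 0.2 SD of the logit (a common convention).
caliper = 0.2 * df["ps_logit"].std()
treated = df[df["tavr"]]
controls = df[~df["tavr"]].copy()

pairs = []
for idx, row in treated.iterrows():
    distance = (controls["ps_logit"] - row["ps_logit"]).abs()
    if len(distance) and distance.min() <= caliper:
        best = distance.idxmin()
        pairs.append((idx, best))
        controls = controls.drop(best)   # match without replacement

print(f"Matched {len(pairs)} TAVR/SAVR pairs from {int(df['tavr'].sum())} TAVR patients")
```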

A total of 1,070 cases of infective endocarditis occurred during a mean follow-up of just over 2 years. The rate of hospital admission for this complication was 1.89% per year in the TAVR group and a similar 1.71% per year in the SAVR cohort.

Of note, all-cause mortality in TAVR patients who developed infective endocarditis was 1.32-fold greater than in SAVR patients with infective endocarditis, a statistically significant difference. The increased mortality risk in the TAVR group probably reflects, at least in part, the investigators’ inability to fully capture and control for that group’s greater frailty, according to the cardiologist.

Risk factors for infective endocarditis common to TAVR and SAVR patients included male gender, a higher Charlson Comorbidity Index score, and a greater frailty index. The main predictors unique to the TAVR patients were atrial fibrillation, anemia, and tricuspid regurgitation. And although pacemaker and defibrillator implantation were risk factors for infective endocarditis in the SAVR patients, they weren’t predictive of increased risk in the TAVR population. Dr. Fauchier called this finding “quite reassuring” given that roughly 20% of the TAVR group received a pacemaker.

The causative microorganisms for infective endocarditis were essentially the same in the TAVR and SAVR groups, simplifying antimicrobial prophylaxis decision making.

Dr. Fauchier reported having no financial conflicts regarding the study, conducted free of commercial support. He serves as a consultant to and/or on speakers’ bureaus for Bayer, BMS/Pfizer, Boehringer Ingelheim, Medtronic, and Novartis.

Novel cardiac troponin protocol rapidly rules out MI

An accelerated rule-out pathway, relying on a single high-sensitivity cardiac troponin test at presentation to the emergency department (ED) with suspected acute coronary syndrome (ACS), reduced length of stay and hospital admission rates without increasing cardiac events at 30 days or 1 year in a major Scottish study.

“We conclude that implementation of this early rule-out pathway is both effective and safe, and adoption of this pathway will have major benefits for patients and health care systems,” Nicholas L. Mills, MBChB, PhD, said in presenting the results of the HiSTORIC (High-Sensitivity Cardiac Troponin at Presentation to Rule Out Myocardial Infarction) trial at the annual congress of the European Society of Cardiology.

Indeed, in the United States, where more than 20 million people per year present to EDs with suspected ACS, the 3.3-hour reduction in length of stay achieved in the HiSTORIC trial by implementing the accelerated rule-out pathway would add up to a $3.6 billion annual savings in bed occupancy alone, according to Dr. Mills, who is chair of cardiology at the University of Edinburgh.

The HiSTORIC pathway incorporates separate thresholds for risk stratification and diagnosis. This strategy is based on an accumulation of persuasive evidence that the major advantage of high-sensitivity cardiac troponin testing is to rule out MI, rather than to rule it in, Dr. Mills explained.

HiSTORIC was a 2-year, prospective, stepped-wedge, cluster-randomized, controlled trial including 31,492 consecutive patients with suspected ACS who presented to seven participating hospitals in Scotland. Patients were randomized, at the hospital level, to one of two management pathways. The control group received the standard guideline-recommended strategy involving high-sensitivity cardiac troponin I testing upon presentation and again 6-12 hours later, with MI ruled out if the troponin levels were not above the 99th percentile.

In contrast, the novel early rule-out strategy worked as follows: If the patient presented with at least 2 hours of symptoms and the initial troponin I level was below 5 ng/L, then MI was ruled out and the patient was triaged straightaway for outpatient management. If the level was above the 99th percentile, the patient was admitted for serial testing to be done 6-12 hours after symptom onset. And for an intermediate test result – that is, a troponin level between 5 ng/L and the 99th percentile – patients remained in the ED for retesting 3 hours from the time of presentation, and were subsequently admitted only if their troponin level was rising.
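Expressed in code, the pathway described above reduces to a short branching rule. The sketch below is purely illustrative: the function name and disposition labels are hypothetical, the 99th-percentile value is assay- and sex-specific and must come from the laboratory, and the handling of early presenters and of what counts as a “rising” level are simplifications rather than the trial’s exact criteria.

```python
from typing import Optional

def triage_suspected_acs(troponin_ng_l: float,
                         hours_since_symptom_onset: float,
                         percentile_99_ng_l: float,
                         repeat_troponin_ng_l: Optional[float] = None) -> str:
    """Illustrative rendering of the HiSTORIC early rule-out logic."""
    # Early rule-out: at least 2 h of symptoms and initial hs-cTnI < 5 ng/L.
    if hours_since_symptom_onset >= 2 and troponin_ng_l < 5:
        return "rule out MI - discharge for outpatient management"

    # Clearly elevated: admit for serial testing 6-12 h after symptom onset.
    if troponin_ng_l > percentile_99_ng_l:
        return "admit - serial troponin 6-12 h after symptom onset"

    # Otherwise (intermediate result, or an early presenter below 5 ng/L):
    # retest in the ED 3 h after presentation and admit only if rising.
    if repeat_troponin_ng_l is None:
        return "retest in ED 3 h after presentation"
    if repeat_troponin_ng_l > troponin_ng_l:   # simplistic "rising" check
        return "admit - rising troponin"
    return "rule out MI - discharge for outpatient management"
```

For example, a patient presenting 4 hours after symptom onset with an initial troponin of 3 ng/L would be ruled out and discharged at the first decision point, whereas a patient with an intermediate initial value would be retested at 3 hours and admitted only if the level had risen.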



Using the accelerated rule-out strategy, two-thirds of patients were quickly discharged from the ED on the basis of a troponin level below 5 ng/L, and another 7% were ruled out for MI and discharged from the ED after a 3-hour stay on the basis of their second test.

The primary efficacy outcome was length of stay from initial presentation to the ED to discharge. The duration was 10.1 hours with the guideline-recommended pathway and 6.8 hours with the accelerated rule-out pathway, for a statistically significant and clinically meaningful 3.3-hour difference. Moreover, the proportion of patients discharged directly from the ED without hospital admission increased from 53% to 74%, an absolute increase of 21 percentage points.

The primary safety outcome was the rate of MI or cardiac death post discharge. The rates at 30 days and 1 year were 0.4% and 2.6%, respectively, in the standard-pathway group, compared with 0.3% and 1.8% with the early rule-out pathway. Those between-group differences favoring the accelerated rule-out pathway weren’t statistically significant, but they provided reassurance that the novel pathway was safe.

Of note, this was the first-ever randomized trial to evaluate the safety and efficacy of an early rule-out pathway. Other rapid diagnostic pathways are largely based on observational experience and expert opinion, Dr. Mills said.

The assay utilized in the HiSTORIC trial was the Abbott Diagnostics Architect high-sensitivity cardiac troponin I assay. The 5-ng/L threshold for early rule-out was chosen for the trial because an earlier study by Dr. Mills and coinvestigators showed that a level below that cutoff had a 99.6% negative predictive value for MI (Lancet. 2015 Dec 19;386[10012]:2481-8).

The early rule-out pathway was deliberately designed to be simple and pragmatic, according to the cardiologist. “One of the most remarkable observations in this trial was the adherence to the pathway. We prespecified three criteria to evaluate this and demonstrated adherence rates of 86%-92% for each of these criteria. This was despite the pathway being implemented in all consecutive patients at seven different hospitals and used by many hundreds of different clinicians.”

Discussant Hugo A. Katus, MD, called the HiSTORIC study “a really urgently needed and very well-conducted trial.”


“There were very consistently low MI and cardiac death rates at 30 days and 1 year. So this really works,” commented Dr. Katus, who is chief of internal medicine and director of the department of cardiovascular medicine at Heidelberg (Germany) University.

“Accelerated rule-out high-sensitivity cardiac troponin protocols are here to stay,” he declared.

However, Dr. Katus voiced a concern: “By early discharge as rule out, are other life-threatening conditions ignored?”

He raised this issue because of what he views as the substantial 1-year all-cause mortality and return-to-hospital rates of 5.8% and 39.2% in the standard-pathway group and 5.2% and 38.9% in the accelerated rule-out patients in HiSTORIC. An accelerated rule-out strategy should not prohibit a careful clinical work-up, he emphasized.


The HiSTORIC trial was funded by the British Heart Foundation. Dr. Mills reported receiving research grants from Abbott Diagnostics and Siemens.

Simultaneous with Dr. Mills’ presentation of the HiSTORIC trial results at the ESC congress, an earlier study that formed the scientific basis for the investigators’ decision to employ distinct risk stratification and diagnostic thresholds for cardiac troponin testing was published online (Circulation. 2019 Sep 1. doi: 10.1161/CIRCULATIONAHA.119.042866). The actual HiSTORIC trial results will be published later.

Dr. Katus reported holding a patent for a cardiac troponin T test and serving as a consultant to AstraZeneca, Bayer, Boehringer Ingelheim, and Novo Nordisk.

Smoking, inactivity most powerful post-MI lifestyle risk factors

Not all lifestyle-related cardiovascular risk factors are equal in power when it comes to secondary prevention after a first acute MI, according to a massive Swedish registry study.


Insufficient physical activity and current smoking were consistently the strongest risk factors for all-cause mortality, major adverse cardiovascular events, and other key adverse outcomes in an analysis from the SWEDEHEART registry. The study included 65,002 patients discharged after a first MI and 325,010 age- and sex-matched controls with no prior MI, all followed for a median of 5.5 years and a maximum of 12, Emil Hagstrom, MD, PhD, reported at the annual congress of the European Society of Cardiology.

Strongest lifestyle risk factors

The study examined the long-term relative importance of control of six major lifestyle risk factors for secondary cardiovascular prevention: current smoking, insufficient physical activity, blood pressure of 140/90 mm Hg or more, obesity, a fasting blood glucose of at least 126 mg/dL, and an LDL cholesterol of 70 mg/dL or more. Notably, two risk factors that physicians often emphasize in working with their patients with known coronary heart disease – an elevated LDL cholesterol and obesity – barely moved the needle. Out of the six risk factors scrutinized, those two consistently showed the weakest association with long-term risk of adverse outcomes. Occupying the middle ground in terms of predictive strength were hypertension and elevated blood glucose, according to Dr. Hagstrom, a cardiologist at Uppsala (Sweden) University.

Risk factor status was assessed 6-10 weeks post MI. Insufficient physical activity was defined as not engaging in at least 30 minutes of moderate-intensity exercise on at least 5 days per week. And when Dr. Hagstrom recalculated the risk of adverse outcomes using an LDL cholesterol threshold of 55 mg/dL rather than using 70 mg/dL, as recommended in new ESC secondary prevention guidelines released during the congress, the study results remained unchanged.
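Taken together, those definitions amount to a simple six-item checklist. The sketch below counts how many factors are not at target for a given patient using the thresholds stated above; the field names and the dictionary-based record are hypothetical illustrations, not SWEDEHEART variables.

```python
def risk_factors_not_at_target(p: dict) -> int:
    """Count of the six factors not at target, per the thresholds in the text."""
    flags = [
        p["current_smoker"],                                   # current smoking
        not p["exercise_30min_5days_per_week"],                # insufficient activity
        p["systolic_bp"] >= 140 or p["diastolic_bp"] >= 90,    # BP 140/90 mm Hg or more
        p["obese"],                                            # obesity
        p["fasting_glucose_mg_dl"] >= 126,                     # glucose >= 126 mg/dL
        p["ldl_mg_dl"] >= 70,                                  # LDL >= 70 mg/dL
    ]
    return sum(flags)

example_patient = {
    "current_smoker": False,
    "exercise_30min_5days_per_week": False,
    "systolic_bp": 150,
    "diastolic_bp": 85,
    "obese": False,
    "fasting_glucose_mg_dl": 110,
    "ldl_mg_dl": 95,
}
print(risk_factors_not_at_target(example_patient))  # -> 3 (inactivity, BP, LDL)
```

As described in the next section, each increment in that count was associated with a progressively higher risk of adverse outcomes.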

Cumulative effects

A key SWEDEHEART finding underscoring the importance of lifestyle in secondary prevention was a stepwise relationship between the number of risk factors not at target and the risk of each of the adverse outcomes assessed, including stroke and heart failure hospitalization as well as all-cause mortality, cardiovascular mortality, and major bleeding.



Moreover, patients with all six risk factors at target when assessed after their MI had the same risks of all-cause mortality, cardiovascular mortality, and stroke as the matched controls.

For example, in an analysis adjusted for comorbid cancer, chronic obstructive pulmonary disease, and dementia, post-MI patients with zero risk factors had the same long-term risk of cardiovascular mortality as controls without a history of MI at baseline. With one risk factor not at target, a patient had a 41% increased risk compared with controls, a statistically significant difference. With two out-of-whack risk factors, the risk climbed to 102%. With three, 185%. With four risk factors not at target, the all-cause mortality risk jumped to 291%. And patients with more than four of the six risk factors not at target had a 409% greater risk of all-cause mortality than controls who had never had a heart attack.

When Dr. Hagstrom stratified subjects by age at baseline – up to 55, 56-64, 65-70, and 70-75 years – he discovered that, regardless of age, patients with zero risk factors had the same risk of all-cause mortality and other adverse outcomes as controls. However, when risk factors were present, younger patients consistently had a higher risk of all adverse outcomes than older patients with the same number of risk factors. When asked for an explanation of this phenomenon, Dr. Hagstrom noted that younger patients with multiple risk factors have a longer time to be exposed to and accumulate risk.

Follow-up of the study cohort will continue for years to come, the cardiologist promised.

At an ESC congress highlights session that closed out the meeting, Eva Prescott, MD, put the SWEDEHEART study at the top of her list of important developments in preventive cardiology arising from the congress.

“This is an excellent national registry I think we’re all envious of,” commented Dr. Prescott, a cardiologist at Copenhagen University. “The conclusion of this registry-based data, I think, is that lifestyle really remains at the core of prevention of cardiovascular events still today.”

The SWEDEHEART study analysis was funded free of commercial support. Dr. Hagstrom reported serving as a consultant to or receiving speakers’ fees from Amgen, AstraZeneca, Bayer, Novo Nordisk, and Sanofi.

Moderate aortic stenosis just as deadly as severe AS

The 5-year mortality rate associated with untreated moderate aortic stenosis is just as grim as that for severe aortic stenosis, according to new findings from the largest-ever study of the natural history of aortic stenosis.


“These data provide a clear signal of the expected adverse outcomes for individuals presenting across the globe with a mean aortic valve gradient greater than 20.0 mm Hg or a peak aortic valve velocity above 3.0 m/sec,” Geoff Strange, PhD, said in presenting an analysis from NEDA, the National Echocardiography Database of Australia, at the annual congress of the European Society of Cardiology.

These results, if confirmed in other large datasets, could potentially have enormous implications for the use of transcatheter and surgical aortic valve replacement, interventions which until now have been restricted to patients with severe aortic stenosis (AS) as defined by an aortic valve (AV) mean gradient in excess of 40 mm Hg or a peak AV velocity greater than 4.0 m/sec. This restriction was based on what Dr. Strange considers rather limited and flimsy evidence suggesting that the mortality associated with AS was negligible except in severe AS.
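To put the two sets of cut points side by side, the sketch below encodes the guideline definition of severe AS together with the roughly 20 mm Hg/3.0 m/sec level above which mortality rose in NEDA. It is an illustration of the numbers quoted in this article (the function and key names are hypothetical), not a clinical decision tool.

```python
def classify_as_thresholds(mean_gradient_mmhg: float,
                           peak_velocity_m_s: float) -> dict:
    """Flag the two thresholds discussed in the text for a given echo result."""
    # Guideline definition of severe AS: mean gradient > 40 mm Hg or
    # peak velocity > 4.0 m/sec.
    guideline_severe = mean_gradient_mmhg > 40 or peak_velocity_m_s > 4.0
    # Approximate level above which mortality rose in the NEDA analysis:
    # mean gradient > 20 mm Hg or peak velocity > 3.0 m/sec.
    neda_mortality_signal = mean_gradient_mmhg > 20 or peak_velocity_m_s > 3.0
    return {
        "guideline_severe": guideline_severe,
        "neda_elevated_mortality_signal": neda_mortality_signal,
    }

print(classify_as_thresholds(mean_gradient_mmhg=25, peak_velocity_m_s=3.2))
# {'guideline_severe': False, 'neda_elevated_mortality_signal': True}
```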

“Cut points used to stratify for interventional strategies are based on very small numbers,” observed Dr. Strange, professor of medicine at the University of Notre Dame in Fremantle, Australia.

The NEDA findings, he added, constitute a call to action: “These data provide the impetus for a contemporary evaluation of the risk-to-benefit ratio of intervention in the moderate AS population,” Dr. Strange declared.

He and his NEDA coinvestigators analyzed echocardiographic data on 241,303 individuals in the Australian database, zeroing in on the 25,827 with untreated mild, moderate, or severe native valve AS. To place the size and scope of this project into perspective, the next-largest study of the natural history of untreated AS included 1,375 individuals – and that study was in turn roughly 10-fold bigger than the handful of other published studies addressing this issue.

A key finding in the NEDA study was that the 5-year all-cause mortality rate of 61.4% in the group with moderate AS wasn’t significantly different from the 64.6% rate in those with severe AS.

The investigators performed additional analyses, analyzing peak velocity and mean gradient as continuous variables and stratifying patients into quintiles on that basis. They found that the top quintile for AV velocity started very low, at 1.73 m/sec, while the top quintile for mean AV gradient also started at a surprisingly low level: greater than 9.6 mm Hg. They noted that both all-cause and cardiovascular-specific mortality rates were basically flat until reaching what Dr. Strange described as “a sharp pivot point upward” right around 20 mm Hg or 3 m/sec.

“No matter how we looked at these data – whether we looked at patients with or without left heart disease, whether we used the dimensionless index, whether we adjusted for stroke volume index, whether we stratified between age above or below 65, whether we used the gradient, the velocity, or the AV area – this threshold of increasing mortality at around 20 mm Hg or 3 m/sec continued to emerge,” according to Dr. Strange.



He noted that this study used real-world data with hard endpoints – actuarial patient mortality outcomes obtained through linkage to the national database – rather than hypothetical projections based upon Kaplan-Meier curves. A study limitation was that comorbidity data couldn’t be obtained for the AS patients.

Session cochair Patrizio Lancellotti, MD, PhD, commented, “I think this study will change a bit our consideration about patients with moderate AS.”

However, Dr. Lancellotti, who was lead author of the second-largest study of the natural history of aortic stenosis (JAMA Cardiol. 2018 Nov 1;3[11]:1060-8), expressed misgivings about the NEDA system’s lack of a core echocardiographic laboratory for imaging adjudication. That’s a study weakness given that image quality and the accuracy of echocardiographic interpretation are so highly dependent upon an individual cardiologist’s skill, observed Dr. Lancellotti, who is head of cardiology at the University of Liege (Belgium).

Dr. Strange replied that he and his coinvestigators analyzed a random subset of the NEDA data and found very little interlaboratory variability in results.

“All I can say is that the labs that contributed to this study are the most eminent labs across Australia,” he added.

Simultaneously with Dr. Strange’s presentation at the congress, the NEDA study results were published online (J Am Coll Cardiol. Sep 2019. doi: 10.1016/j.jacc.2019.08.004).

Dr. Strange reported having no financial conflicts of interest regarding the NEDA project, which is funded by GlaxoSmithKline, Bayer, and Actelion.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

– The 5-year mortality rate associated with untreated moderate aortic stenosis is just as grim as it is for severe aortic stenosis, according to new findings from the largest-ever study of the natural history of aortic stenosis.

Dr. Geoff Strange

“These data provide a clear signal of the expected adverse outcomes for individuals presenting across the globe with a mean aortic valve gradient greater than 20.0 mm Hg or a peak aortic valve velocity above 3.0 m/sec,” Geoff Strange, PhD, said in presenting an analysis from NEDA, the National Echocardiography Database of Australia, at the annual congress of the European Society of Cardiology.

These results, if confirmed in other large datasets, could potentially have enormous implications for the use of transcatheter and surgical aortic valve replacement, interventions which until now have been restricted to patients with severe aortic stenosis (AS) as defined by an aortic valve (AV) mean gradient in excess of 40 mm Hg or a peak AV velocity greater than 4.0 m/sec. This restriction was based on what Dr. Strange considers rather limited and flimsy evidence suggesting that the mortality associated with AS was negligible except in severe AS.

“Cut points used to stratify for interventional strategies are based on very small numbers,” observed Dr. Strange, professor of medicine at the University of Notre Dame in Fremantle, Australia.

The NEDA findings, he added, constitute a call to action: “These data provide the impetus for a contemporary evaluation of the risk-to-benefit ratio of intervention in the moderate AS population,” Dr. Strange declared.

He and his NEDA coinvestigators analyzed echocardiographic data on 241,303 individuals in the Australian database, zeroing in on the 25,827 with untreated mild, moderate, or severe native valve AS. To place the size and scope of this project into perspective, the next-largest study of the natural history of untreated AS included 1,375 individuals – and that study was in turn roughly 10-fold bigger than the handful of other published studies addressing this issue.

A key finding in the NEDA study was that the 5-year all-cause mortality rate of 61.4% in the group with moderate AS wasn’t significantly different from the 64.6% rate in those with severe AS (see graphic).

The investigators performed additional analyses, analyzing peak velocity and mean gradient as continuous variables and stratifying patients into quintiles on that basis. They found that the top quintile for AV velocity started very low, at 1.73 m/sec, while the top quintile for mean AV gradient also started at a surprisingly low level: greater than 9.6 mm Hg. They noted that both all-cause and cardiovascular-specific mortality rates were basically flat until taking what Dr. Strange described as “a sharp pivot point upward” right around 20 mm Hg or 3 m/sec.

“No matter how we looked at these data – whether we looked at patients with or without left heart disease, whether we used the dimensionless index, whether we adjusted for stroke volume index, whether we stratified between age above or below 65, whether we used the gradient, the velocity, or the AV area – this threshold of increasing mortality at around 20 mm Hg or 3 m/sec continued to emerge,” according to Dr. Strange.



He noted that this study used real-world data with hard endpoints – actuarial patient mortality outcomes obtained through linkage to the national database – rather than hypothetical projections based upon Kaplan-Meier curves. A study limitation was that comorbidity data couldn’t be obtained for the AS patients.

Session cochair Patrizio Lancellotti, MD, PhD, commented, “I think this study will change a bit our consideration about patients with moderate AS.”

However, Dr. Lancellotti, who was lead author of the second-largest study of the natural history of aortic stenosis (JAMA Cardiol. 2018 Nov 1;3[11]:1060-8), expressed misgivings about the NEDA system’s lack of a core echocardiographic laboratory for imaging adjudication. That’s a study weakness given that image quality and the accuracy of echocardiographic interpretation are so highly dependent upon an individual cardiologist’s skill, observed Dr. Lancellotti, who is head of cardiology at the University of Liege (Belgium).

Dr. Strange replied that he and his coinvestigators analyzed a random subset of the NEDA data and found very little interlaboratory variability in results.

“All I can say is that the labs that contributed to this study are the most eminent labs across Australia,” he added.

Simultaneously with Dr. Strange’s presentation at the congress, the NEDA study results were published online (J Am Coll Cardiol. Sep 2019. doi: 10.1016/j.jacc.2019.08.004).

Dr. Strange reported having no financial conflicts of interest regarding the NEDA project, which is funded by GlaxoSmithKline, Bayer, and Actelion.

 



REPORTING FROM THE ESC CONGRESS 2019


Cannabis-using MS patients improve cognition with 28 days of abstinence

Article Type
Changed
Wed, 10/30/2019 - 14:39

 

– The good news about cognitive impairment in patients with multiple sclerosis who’ve been using cannabis heavily for symptom relief – even for many years – is that their memory, executive function, and information processing speed will improve significantly once they’ve been off the drug for just 28 days, according to the results of a randomized trial presented at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis.

Cecilia Meza

“It’s good for neurologists to know that, if they prescribe cannabis or their patient is self-medicating and chooses to stop, their cognition will improve considerably,” observed Cecilia Meza, a coinvestigator in the study led by Anthony Feinstein, MD, professor of psychiatry at the University of Toronto.

But there’s a surprise twist to this study, she explained in an interview: “We showed patients their results, and they also felt that their cognition was doing a lot better, but despite that, they would rather be using cannabis to feel better than to have their memory intact. The pain was that bad,” said Ms. Meza, a research coordinator at the university’s Sunnybrook Research Institute.

It’s known that cognitive impairment in healthy long-term cannabis users, provided they started as adults, is fully reversed after 28 days of abstinence. But disease-related cognitive dysfunction affects 40%-80% of patients with MS, and cannabis use may compound this impairment. This Canadian study asked a previously unaddressed question: Does coming off cannabis make a difference cognitively in the MS population?



The study included 40 MS patients with global impairment of cognition, none of whom were cannabis users prior to their diagnosis. They typically started using it for MS symptom relief 2-3 years after receiving their diagnosis. By the time they were approached for study participation, they had been using cannabis four to five times per day or more for an average of 7 years for relief of symptoms, including incontinence, spasticity, poor sleep, headaches, and difficulties in eating.

All participants were willing to try 28 days of abstinence; half were randomized to do so, while the others stayed the course. Study endpoints included change from baseline to day 28 on the Brief Repeatable Neuropsychological Battery, functional MRI performed while taking the Symbol Digit Modalities Test, and urine testing to verify compliance with abstinence.

By day 28, the abstinence group – in which urine testing confirmed that all but one participant had remained cannabis free for the study duration – performed significantly better on the neuropsychological test battery than at baseline, with an associated significant increase in brain activation in the bilateral inferior frontal gyri, as well as the caudate and the declive of the cerebellum, while executing the Symbol Digit Modalities Test. The control group, which kept on using cannabis, showed no such improvements.

The full study details were published in conjunction with Ms. Meza’s presentation (Brain. 2019 Sep 1;142[9]:2800-12).

She reported having no financial conflicts regarding the study, funded by the Multiple Sclerosis Society of Canada.

SOURCE: Meza C. ECTRIMS 2019, Abstract P542.



REPORTING FROM ECTRIMS 2019

Publish date: September 30, 2019

Early maternal anxiety tied to adolescent hyperactivity

Article Type
Changed
Wed, 10/16/2019 - 14:32

 

– Exposure to maternal somatic anxiety during pregnancy and toddlerhood increases a child’s risk of hyperactivity symptoms in adolescence, Blanca Bolea, MD, said at the annual congress of the European College of Neuropsychopharmacology.

Dr. Blanca Bolea

In contrast, maternal anxiety was not associated with an increased risk of subsequent inattention symptoms in the children, in an analysis of 8,725 mothers and their children participating in the Avon Longitudinal Study of Parents and Children, a prospective epidemiologic cohort study ongoing in southwest England since 1991, said Dr. Bolea, a psychiatrist at the University of Toronto.

These findings have practical implications for clinical care: “If we know that women who are anxious in the perinatal period put their children at risk for hyperactivity later on, then we can tackle their anxiety in pregnancy or toddlerhood. And that’s easy to do: You can do group [cognitive-behavioral therapy]; you can give medications, so there are things you can do to reduce that risk. That’s relevant, because we don’t know much about how to reduce levels of ADHD. We know it has a genetic component, but we can’t touch that. You cannot change your genes, so far. But environmental things, we can change. So if we can identify the mothers who are more anxious during pregnancy and toddlerhood and give them resources to reduce their anxiety, then we can potentially reduce hyperactivity later on,” she explained in an interview.

In the Avon study, maternal anxiety was serially assessed from early pregnancy up until a child’s 5th birthday.

“We looked for maternal symptoms similar to panic disorder: shortness of breath, dizziness, sweating, things like that. These are symptoms that any clinician can identify by asking the mothers, so it’s not hard to identify the mothers who could be at risk,” according to the psychiatrist.

Children in the Avon study were assessed for symptoms of inattention at age 8.5 years using the Sky Search, Sky Search Dual Task, and Opposite Worlds subtests of the Test of Everyday Attention for Children. Hyperactivity symptoms were assessed at age 16 years via the Strengths and Difficulties Questionnaire.

In an analysis adjusted for potentially confounding sociodemographic factors, adolescents whose mothers were rated by investigators as having moderate or high somatic anxiety during pregnancy and the toddlerhood years were at 2.1-fold increased risk of hyperactivity symptoms, compared with those whose mothers had low or no anxiety. Increased maternal anxiety wasn’t associated with scores on any of the three tests of inattention.
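To illustrate the general shape of such an adjusted analysis, the sketch below fits a logistic regression of an adolescent hyperactivity indicator on maternal somatic anxiety plus a few sociodemographic covariates, using simulated data. All variable names and values are hypothetical, and the exponentiated coefficient it prints is an odds ratio, whereas the 2.1-fold figure reported above is the risk estimate from the Avon analysis itself.

```python
# Illustrative sketch only: an adjusted logistic regression of the general form
# described above. The data are simulated and the variable names are invented;
# this is not the ALSPAC dataset or the investigators' model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "anxiety_mod_high": rng.integers(0, 2, n),  # 1 = moderate/high maternal somatic anxiety
    "maternal_age": rng.normal(28, 5, n),
    "low_income": rng.integers(0, 2, n),
    "child_male": rng.integers(0, 2, n),
})
# Simulated outcome with an anxiety effect built in, purely for demonstration.
linear_predictor = -2.0 + 0.7 * df["anxiety_mod_high"] + 0.3 * df["low_income"]
df["hyperactivity_age16"] = (rng.random(n) < 1 / (1 + np.exp(-linear_predictor))).astype(int)

model = smf.logit(
    "hyperactivity_age16 ~ anxiety_mod_high + maternal_age + low_income + child_male",
    data=df,
).fit(disp=False)

# The exponentiated coefficient is the covariate-adjusted odds ratio for anxiety.
print(f"Adjusted OR for maternal anxiety: {np.exp(model.params['anxiety_mod_high']):.2f}")
```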

Dr. Bolea cautioned that, while these Avon study findings document an association between early maternal anxiety and subsequent adolescent hyperactivity, that doesn’t prove causality. The findings are consistent, however, with the fetal origins hypothesis put forth by the late British epidemiologist David J. Barker, MD, PhD, which postulates that stressful fetal circumstances have profound effects later in life.

“What we’re thinking here is, if the mother is anxious during pregnancy, that may change how the fetal brain develops, and it makes kids hyperactive later on,” she said.

The hypothesis has been borne out in animal studies: Stress a pregnant rat, and her offspring will display hyperactivity.

Dr. Bolea reported having no financial conflicts regarding her study. The Avon Longitudinal Study of Parents and Children is funded by the Medical Research Council and the Wellcome Trust.



REPORTING FROM ECNP 2019


Lumateperone for schizophrenia shows safety, tolerability in long-term study

Article Type
Changed
Sat, 09/28/2019 - 19:53

 

– Lumateperone, a novel investigational drug for schizophrenia with a unique triple mechanism of action, showed impressive safety and tolerability while achieving a continuous decline in schizophrenia symptoms over the course of a year in a long-term, open-label study, Suresh Durgam, MD, said at the annual congress of the European College of Neuropsychopharmacology.

Dr. Suresh Durgam

Indeed, patients on lumateperone at the 1-year mark showed significant reductions in LDL cholesterol, total cholesterol, serum prolactin, and body weight, compared with baseline values recorded when participants were on various standard-of-care antipsychotics prior to switching. Other cardiometabolic parameters, including fasting blood glucose, insulin, triglycerides, and HDL cholesterol, showed only negligible change over the course of the study, according to Dr. Durgam, a psychiatrist and senior vice president for late-stage clinical development and medical affairs at Intra-Cellular Therapies, the New York–based company developing lumateperone as its lead product.

This favorable cardiometabolic profile contrasts sharply with those of currently available antipsychotic agents, many of which worsen cardiometabolic risk factors. That would seem to be a major advantage for lumateperone and is likely to be a factor in the Food and Drug Administration’s ongoing deliberation over the company’s new drug application. The agency has promised a decision on the application by the end of December, Dr. Durgam said.

Intra-Cellular Therapies’ stock price took a hit in July 2019, when the FDA abruptly canceled an advisory committee meeting scheduled to consider lumateperone. The agency sought additional information on animal toxicology studies. Having received it from the company, the FDA no longer plans to schedule an advisory committee meeting before issuing its marketing approval decision.

Lumateperone is an oral once-daily drug that doesn’t require titration. Its high degree of tolerability is thought to be attributable to the drug’s mechanism of action, which involves simultaneous modulation of three different neurotransmitter pathways: serotonin, dopamine, and glutamate. The drug is a potent serotonin 5-HT2A antagonist and serotonin reuptake inhibitor, a dopamine D2 presynaptic partial agonist and postsynaptic antagonist, and a modulator of glutamate via activation of the D1 receptor.

Three phase 3, double-blind, placebo-controlled randomized clinical trials of 4-6 weeks duration have been completed in a total of 1,481 patients with acute exacerbation of schizophrenia. Two trials were positive, with lumateperone achieving significantly greater mean reductions in the Positive and Negative Syndrome Scale (PANSS) total score than placebo, while the third was negative, with no significant between-group difference. Of note, the safety profile of lumateperone was indistinguishable from placebo with the sole exception of somnolence, where the 20% incidence was twice that of placebo-treated controls. However, in the open-label program, dosing was switched from morning to evening, with a resultant drop in somnolence to the placebo level, Dr. Durgam said.



The open-label program has two parts. Part 1 was conducted in 302 patients with stable, generally mild schizophrenia symptoms while on risperidone, olanzapine, or various other antipsychotics commonly prescribed in the United States. They were switched to lumateperone at 42 mg once daily for 6 weeks, at which point they demonstrated significant reductions in body weight, serum prolactin, insulin, total cholesterol, and LDL cholesterol. They then were switched back to their former medications, with a resultant worsening of those parameters to prelumateperone levels, providing evidence that the favorable cardiometabolic changes were indeed an effect of lumateperone.

Part 2 of the open-label program is the long-term study, in which 603 patients with stable symptoms on standard-of-care antipsychotics were switched to lumateperone at 42 mg/day, to be followed for 1 year or more. Dr. Durgam presented an interim analysis focused on the first 107 patients to achieve the 1-year treatment milestone. Most were obese at baseline: the group’s mean body mass index was 31.3 kg/m2. They experienced progressive weight loss, with a mean reduction of 1.82 kg on day 175 and 3.16 kg on day 350. About 24% of subjects had a 7% or greater reduction in body weight, while 8% had at least a 7% weight gain. Waist circumference decreased by an average of 5.2 cm from a baseline of 103.2 cm in men and by 1.9 cm in women.

The primary focus of the ongoing long-term study is safety. The most common treatment-emergent adverse events during a full year of therapy were dry mouth, headache, and diarrhea, each occurring in about 7% of patients. Only 0.8% of patients developed extrapyramidal symptoms.

At 150 days of treatment in 340 patients, 30% had achieved a PANSS response, defined as at least a 20% improvement in PANSS total score, compared with baseline. At 300 days in the smaller group who had reached that milestone at the time of the interim analysis, the PANSS response rate had grown to 41%.

Among patients with schizophrenia and comorbid depression as defined by a Calgary Depression Scale for Schizophrenia (CDSS) score of 6 or more at baseline, lumateperone at 42 mg/day improved depressive symptoms, such that 60% of those patients achieved a CDSS response – that is, at least a 50% reduction in the score – by day 75. This finding supports data from earlier short-term studies, and suggests that lumateperone’s multiple mechanisms of action and high tolerability make it a promising candidate for treatment of depression and other symptom domains of schizophrenia that are currently inadequately treated, according to Dr. Durgam.
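As a concrete illustration of the two responder definitions quoted above (at least a 20% reduction in PANSS total score and at least a 50% reduction in CDSS score from baseline), the following minimal sketch computes responder status for a few made-up score pairs. The numbers are invented for demonstration, not trial data.

```python
# Minimal sketch of the responder definitions cited above: a PANSS response is
# a >=20% reduction in total score from baseline, and a CDSS response is a
# >=50% reduction. The scores below are made-up examples, not trial data.
def percent_reduction(baseline: float, follow_up: float) -> float:
    """Fractional reduction from baseline (positive = improvement)."""
    return (baseline - follow_up) / baseline

def is_panss_responder(baseline_total: float, follow_up_total: float) -> bool:
    return percent_reduction(baseline_total, follow_up_total) >= 0.20

def is_cdss_responder(baseline_score: float, follow_up_score: float) -> bool:
    return percent_reduction(baseline_score, follow_up_score) >= 0.50

patients = [
    # (PANSS baseline, PANSS follow-up, CDSS baseline, CDSS follow-up)
    (92, 70, 8, 3),
    (85, 80, 7, 5),
    (100, 78, 9, 4),
]
panss_rate = sum(is_panss_responder(p[0], p[1]) for p in patients) / len(patients)
cdss_rate = sum(is_cdss_responder(p[2], p[3]) for p in patients) / len(patients)
print(f"PANSS response rate: {panss_rate:.0%}, CDSS response rate: {cdss_rate:.0%}")
```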

Dr. Durgam also presented an update on the lumateperone program for bipolar depression, which consists of three phase 3, double-blind, placebo-controlled, 6-week-long clinical trials totaling 1,455 patients. Two have been completed: one positive and the other negative with an unusually high placebo response rate. The ongoing third trial will be the tiebreaker. Safety and tolerability have been as noted in other lumateperone studies.

In the positive trial, the primary efficacy endpoint was change in Montgomery-Åsberg Depression Rating Scale total score, which improved in lumateperone-treated patients by an average of 16.7 points from a baseline score of just over 30, a significantly better result than the 12.1-point reduction in placebo-treated controls. The treatment benefit was similar in bipolar I and bipolar II patients.

The phase 3 trial for treatment of agitation in patients with Alzheimer’s disease and other dementias was stopped early for lack of efficacy in an interim analysis. And lumateperone is in ongoing phase 2 trials for sleep disturbances associated with neuropsychiatric disorders. The phase 2 study in major depressive disorder has been completed.



REPORTING FROM ECNP 2019


Skip supplemental O2 in nonhypoxic ACS

Article Type
Changed
Wed, 09/25/2019 - 14:51

 

– A massive randomized trial that included all New Zealanders with a suspected acute coronary syndrome during a 2-year period has provided definitive evidence that giving high-flow supplemental oxygen to those who are nonhypoxemic is of no clinical benefit, although it wasn’t harmful, either.


“Patients who have a normal blood oxygen saturation level are very unlikely to benefit from supplemental oxygen,” Ralph Stewart, MBChB, said in presenting the results of the NZOTACS (New Zealand Oxygen Therapy in Acute Coronary Syndromes) trial at the annual congress of the European Society of Cardiology.

“It’s amazing that oxygen has been used in patients with suspected heart attack for over 50 years, and during that time there’s never been definite evidence that it improves outcomes. And more recently some have even suggested giving high-level oxygen might actually cause harm,” observed Dr. Stewart, a cardiologist at Auckland City Hospital and the University of Auckland (New Zealand).

The primary outcome in NZOTACS was 30-day all-cause mortality. In the overall study population, the rate was 3.0% in the group assigned to the routine high-flow oxygen protocol and closely similar at 3.1% in those randomized to the conservative oxygen strategy. And there was reassuringly no signal that the liberal oxygen protocol caused any harm.

To conduct this cluster randomized crossover trial, Dr. Stewart and his coinvestigators divided New Zealand into quadrants and, taking advantage of the coordinated health care systems operative in the nation of 4.8 million, they arranged for all ambulances, emergency departments, and hospitals in each geographic region to utilize each supplemental oxygen strategy for a total of 12 months.
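A minimal way to picture that cluster randomized crossover design is the allocation table sketched below: each region runs one oxygen strategy for 12 months and then crosses over to the other. The region labels and the starting assignments are illustrative assumptions, not the actual NZOTACS randomization schedule.

```python
# Illustrative layout of the cluster randomized crossover design: four regions,
# two 12-month periods, every region using each protocol once. Region names and
# starting assignments are made up for the sketch.
regions = ["Quadrant 1", "Quadrant 2", "Quadrant 3", "Quadrant 4"]
protocols = ("high-flow oxygen", "conservative oxygen")

schedule = {}
for i, region in enumerate(regions):
    first = protocols[i % 2]            # half the regions start on each protocol
    second = protocols[(i + 1) % 2]     # then every region crosses over
    schedule[region] = {"months 1-12": first, "months 13-24": second}

for region, periods in schedule.items():
    print(region, periods)
```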

In the liberal oxygen strategy, patients with suspected ACS on the basis of ischemic chest pain or ECG changes received high-flow oxygen by face mask at 6-8 L/min regardless of their blood oxygen saturation (SaO2) level. The oxygen was stopped only upon clinical resolution of myocardial ischemia. In contrast, in the low-oxygen protocol, supplemental oxygen was reserved for patients with an initial SaO2 below 90%, with a target SaO2 of 90%-94%.
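Expressed as simple decision rules, the two strategies in the preceding paragraph look roughly like the sketch below. The thresholds come from the article; the function names and return strings are illustrative shorthand rather than a clinical algorithm.

```python
# Sketch of the two oxygen protocols described above as decision rules. The
# thresholds (90% cutoff, 90%-94% target, 6-8 L/min) come from the article;
# everything else is illustrative shorthand.
def liberal_protocol(sao2_percent: float, ischemia_resolved: bool) -> str:
    # Saturation is deliberately ignored: everyone gets high-flow oxygen
    # until the myocardial ischemia clinically resolves.
    if ischemia_resolved:
        return "stop supplemental oxygen"
    return "face-mask oxygen at 6-8 L/min"

def conservative_protocol(sao2_percent: float) -> str:
    # Oxygen is reserved for hypoxemic patients, titrated to a 90%-94% target.
    if sao2_percent < 90:
        return "give oxygen, titrate to SaO2 90%-94%"
    return "no supplemental oxygen"

print(conservative_protocol(88.0))                       # give oxygen, titrate to SaO2 90%-94%
print(conservative_protocol(96.0))                       # no supplemental oxygen
print(liberal_protocol(96.0, ischemia_resolved=False))   # face-mask oxygen at 6-8 L/min
```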



Roughly 90% of the nearly 41,000 study participants had a normal SaO2 of 90% or more. Their 30-day mortality was 2.1% with the high-oxygen protocol and similar at 1.9% with the conservative oxygen protocol.

In contrast, there was a suggestion of benefit for the routine liberal oxygen strategy in the subgroup of patients with ST-elevation MI. Their 30-day mortality was 8.8% with high-flow oxygen and 10.6% with the conservative oxygen protocol. The resultant 19% relative risk reduction barely missed statistical significance. There was also a trend for possible benefit of routine high-flow oxygen in the roughly 12% of NZOTACS participants with an SaO2 below 95%, a less stringent cutoff than the 90% SaO2 that defines hypoxemia. Their death rate at 30 days was 10.1% if they received supplemental oxygen routinely and 11.1% if they received oxygen only when their SaO2 fell below 90%. But these exploratory findings must be viewed as hypothesis-generating, and a large confirmatory study would be required, Dr. Stewart noted.
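For readers who want to check the arithmetic, the crude relative risk reduction implied by the two 30-day STEMI mortality rates quoted above works out as shown below. The trial’s reported 19% figure presumably reflects its own time-to-event analysis, so this back-of-the-envelope calculation is only an approximation.

```python
# Worked example: crude relative risk reduction from the 30-day STEMI mortality
# rates quoted above (8.8% with high-flow oxygen vs 10.6% with the conservative
# protocol). The trial's reported 19% figure comes from its own analysis; this
# check is only meant to show the arithmetic.
mortality_high_flow = 0.088
mortality_conservative = 0.106

relative_risk = mortality_high_flow / mortality_conservative
relative_risk_reduction = 1 - relative_risk
print(f"Relative risk: {relative_risk:.2f}")                       # ~0.83
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # ~17%, close to the reported 19%
```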

Discussant Robin Hofmann, MD, PhD, commented that, based on the NZOTACS results, he believes a couple of changes to the current ESC guidelines on management of ACS are in order. The guidelines now state that oxygen is indicated in patients with suspected ACS and hypoxemia as defined by an SaO2 below 90%, a Class I recommendation with Level of Evidence C. That should now be upgraded to the strongest-possible Class I A recommendation, according to Dr. Hofmann, a cardiologist at the Karolinska Institute in Stockholm.

The ESC guidelines also state that oxygen isn’t routinely recommended in patients with an SaO2 of 90% or more, rating that guidance Class III B. On the basis of NZOTACS coupled with earlier far smaller studies, that should be changed to a Class III A recommendation, meaning simply don’t do it. The hint provided by NZOTACS of a possible small benefit for oxygen in patients with an SaO2 below 95% isn’t strong enough evidence to carry the day, in Dr. Hofmann’s view.

Dr. Stewart and Dr. Hofmann reported having no financial conflicts of interest. The NZOTACS trial was funded by the National Heart Foundation of New Zealand.

SOURCE: Stewart R. ESC 2019, Hotline Session 2.


 



Virtual dark therapy tames manic episodes

Article Type
Changed
Tue, 09/24/2019 - 15:02

 

– Bright light therapy is a well-established, guideline-recommended treatment for seasonal affective disorder, and many people prone to depression keep a light box at home. But are you ready to embrace the dark side – that is, dark therapy for bipolar mania, or its vastly more patient-friendly offshoot, virtual dark therapy?

Dr. Tone E.G. Henriksen

Virtual dark therapy using blue light–blocking glasses turns out to be a highly effective adjunct to standard antimanic medications in patients with bipolar mania. And it’s a lot easier on patients than the massive sensory deprivation imposed by the original form of dark therapy, which entails keeping a patient with mania in a completely dark room for 14 hours per night, Tone E.G. Henriksen, MD, observed at the annual congress of the European College of Neuropsychopharmacology.

She was lead author of a pioneering randomized controlled trial demonstrating that bipolar patients who wore blue-blocking, orange-tinted glasses for 14 hours per evening while hospitalized for a manic episode experienced a significant improvement in scores on the Young Mania Rating Scale (YMRS), compared with patients randomized to wearing clear lenses. Moreover, the between-group difference achieved strong significance in just 3 days.

That’s a remarkable result, because bipolar mania is such a challenge to treat pharmacologically. The standard medications – mood stabilizers and antipsychotic agents – are slow in onset of effect, observed Dr. Henriksen, a psychiatrist at the University of Bergen (Norway).

Backing up, she noted there is strong evidence of seasonality in bipolar disorder, as highlighted in a systematic review of 51 publications (J Affect Disord. 2014 Oct;168:210-23). That recognition has prompted numerous researchers to focus on the abnormal circadian rhythms prevalent in patients with bipolar disorder; the light/dark cycle is a powerful synchronizing signal for the hypothalamic suprachiasmatic nucleus, the master clock of circadian rhythms. This understanding led to a landmark case-control pilot study by Italian investigators, who exposed 16 bipolar inpatients experiencing a manic episode to 14 hours of complete darkness, from 6 p.m. to 8 a.m., for 3 consecutive nights. The outcome was a dramatic reduction in YMRS scores in the dark therapy group, compared with 16 matched control inpatients, with all participants on pharmacologic treatment as usual (Bipolar Disord. 2005 Feb;7[1]:98-101).

“This was really something,” Dr. Henriksen recalled.

She and her colleagues were impressed by other investigators’ discovery of specialized retinal ganglion cells, known as intrinsically photosensitive retinal ganglion cells, which are responsible for conveying the daylight signal to the brain. These cells contain melanopsin, a photopigment that is sensitive to blue light. The Norwegian investigators reasoned that it might not be necessary to expose patients with mania to prolonged, utter darkness to achieve rapid symptomatic improvement, as the Italian psychiatrists had done. Instead, they hypothesized, it might be sufficient simply to block the blue, short-wavelength end of the light spectrum. And that turned out to be the case.

Their randomized, single-blind, multicenter study included 23 patients with bipolar disorder who were hospitalized for manic symptoms. All remained on their standard background psychiatric medications while being randomized to wear orange-tinted, blue light–blocking glasses, which allowed passage of almost all light above 530 nm, or clear glasses. Participants were instructed to wear their glasses from 6 p.m. to 8 a.m. for 7 consecutive nights. They took their glasses off when they switched off the lights at bedtime, but they had to put them back on if they turned on a light before 8 a.m. The patients also wore an activity monitor.
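As a purely illustrative sketch of that wearing schedule (not study software), the rule can be expressed as a small Python function; the name and arguments are assumptions made here for clarity.

```python
# Glasses on from 6 p.m. to 8 a.m., off after lights-out at bedtime,
# back on if a light is switched on before 8 a.m.
from datetime import time

def glasses_required(now: time, lights_on: bool) -> bool:
    in_evening_window = now >= time(18, 0) or now < time(8, 0)
    return in_evening_window and lights_on

print(glasses_required(time(19, 30), lights_on=True))   # True: evening, lights on
print(glasses_required(time(23, 0), lights_on=False))   # False: lights out at bedtime
print(glasses_required(time(6, 45), lights_on=True))    # True: light on before 8 a.m.
```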

The results were dramatic: The blue-blocking glasses group had a mean 14.1-point drop in their YMRS score from a baseline of about 25, compared with a mere 1.7-point decline in the control group. Moreover, Dr. Henriksen said, this result might actually underrepresent the true clinical effect of blocking blue light to the brain, since two patients in the blue-blocking glasses group experienced such rapid symptomatic improvement that they were moved from an acute psychiatric ward to a local hospital midstudy, a sudden change that triggered transient worsening of manic symptoms in both patients.

The investigators documented improved sleep efficiency in the blue-blocking group. Another noteworthy finding was that, in the blue-blocking group, the elements of the YMRS related to increased activation declined before the measures of distorted thoughts and perceptions. So did motor activity as recorded by actigraph. Meanwhile, nighttime activity worsened in the control group; they received substantially more sedatives, hypnotics, anxiolytic agents, and antipsychotic medications (Bipolar Disord. 2016 May;18[3]:221-32).

The mechanism underlying the improvement in sleep regularity and manic symptoms achieved by blocking blue light is not understood. Dr. Henriksen finds “very compelling” a theory put forth by prominent chronobiologist Daniel Kripke, MD, of the University of California, San Diego. He has shown in animal studies that a change in light exposure can trigger bifurcation in the circadian rhythms of the suprachiasmatic nucleus. The resultant suppression of melatonin secretion results in excess production of hypothalamic triiodothyronine, which in turn affects production of other key hormones. In patients with bipolar disorder, this could trigger mania, according to Dr. Kripke (F1000Res. 2015 May 6;4:107).

Dr. Henriksen reported having no financial conflicts regarding her study, which was conducted free of commercial support. She serves as a consultant to Chrono Chrome AS.


Deep transcranial magnetic stimulation alleviates OCD symptoms

Article Type
Changed
Fri, 09/20/2019 - 14:47

 

– High-frequency deep transcranial magnetic stimulation (dTMS) directed at the anterior cingulate cortex and medial prefrontal cortex proved to be a safe and effective nonpharmacologic treatment for symptoms of obsessive-compulsive disorder in an international, randomized, sham-controlled, double-blind clinical trial that earned the device clearance for that indication from the Food and Drug Administration.

Dr. Lior Carmi

The operative word here is “deep,” lead investigator Lior Carmi, PhD, explained in presenting the pivotal trial results at the annual congress of the European College of Neuropsychopharmacology.

“Deep TMS is a relatively new form of TMS that allows direct stimulation of deeper neuronal pathways than standard TMS. It induces a direct effective field at a depth of 3-5 cm below the skull, compared to less than 1.5 cm for the standard TMS figure-eight coil,” said Dr. Carmi, of Chaim Sheba Medical Center in Ramat Gan, Israel.

The brain circuitry involved in obsessive-compulsive disorder (OCD) is very well known. Multiple potential targets for intervention are available. Dr. Carmi and coinvestigators focused on the anterior cingulate cortex and medial prefrontal cortex, because this is an area that’s very much involved in OCD – it’s the generator of increased error-related negativity on the Stroop task – and it can be stimulated by dTMS, whereas standard TMS can’t reach it.

This was not only the first major clinical trial to successfully target the anterior cingulate cortex and medial prefrontal cortex using any form of TMS, it also was the first study to employ individually tailored symptom provocation using photos or a written script immediately before each treatment session. At the first patient encounter, the investigators created a list of what distressed that particular individual – for example, touching a public bathroom door handle or experiencing doubt about whether the stove had been left on – and then prior to each treatment session they deliberately provoked each study participant using representations of those triggers. The treatment, real or sham, didn’t begin until a patient’s distress level measured 4-7 on a visual analog scale.

“The idea is to deliver the treatment when the brain circuitry is aroused and not while the patient is thinking about the shopping he needs to get done after the session is over,” Dr. Carmi explained.

He was first author of the recently published pivotal study (Am J Psychiatry. 2019 May 21. doi: 10.1176/appi.ajp.2019.18101180) in which 99 adults aged up to 65 years with OCD refractory to at least one selective serotonin reuptake inhibitor underwent real or sham dTMS every weekday for 5 consecutive weeks, plus four sessions during week 6. That’s a total of 29 sessions, with 2,000 magnetic stimulations per session. The study was conducted at 11 centers in the United States, Canada, and Israel. Participants had to remain on an approved drug therapy for OCD or stay engaged in psychotherapy throughout the study.
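The session count follows directly from that schedule, and multiplying the reported figures gives the implied stimulation total for a patient completing every session (a derived number, not one stated in the paper):

\[
5\ \tfrac{\text{sessions}}{\text{week}} \times 5\ \text{weeks} + 4\ \text{sessions} = 29\ \text{sessions}, \qquad 29 \times 2{,}000 = 58{,}000\ \text{stimulations}.
\]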

The primary efficacy outcome was the change in scores on the Yale-Brown Obsessive Compulsive Scale (YBOCS) from baseline to 6 weeks. Patients who received dTMS averaged a 6.0-point reduction, significantly better than the 3.3-point reduction in the sham-treatment group. The treatment response rate, defined as at least a 30% reduction from baseline in YBOCS score, was 38% with dTMS, compared with 11% in controls. One month after the final treatment session, the response rate was 45% in the active-treatment arm, compared with less than 18% in the sham-treatment group.

In addition, 55% of patients in the active-treatment group achieved a partial response of more than a 20% reduction in YBOCS score, a rate slightly more than twice that in the sham group.
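To see how those cutoffs apply to individual scores, here is a hypothetical helper sketched in Python; it is not the trial’s analysis code, and the categories are treated as mutually exclusive here for simplicity.

```python
# Apply the YBOCS response criteria described above to a score pair.
def classify_response(baseline_ybocs: float, week6_ybocs: float) -> str:
    reduction = (baseline_ybocs - week6_ybocs) / baseline_ybocs
    if reduction >= 0.30:
        return "response (at least 30% YBOCS reduction)"
    if reduction > 0.20:
        return "partial response (more than 20% reduction)"
    return "nonresponse"

print(classify_response(28, 18))  # ~36% reduction -> response
print(classify_response(28, 22))  # ~21% reduction -> partial response
```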

To put those findings in perspective, Dr. Carmi highlighted treatment effect–size results from OCD drug trials involving fluoxetine, fluvoxamine, sertraline, and paroxetine, all FDA-approved for treatment of OCD. The placebo-subtracted mean change in YBOCS scores in the pharmacotherapy trials was similar to the sham treatment–subtracted result in the dTMS study, with one important distinction: “In terms of change in YBOCS, it took 10-12 weeks to get those results in the drug trials, while we have shown this in a 6-week period of time,” he noted.

The only adverse effect associated with dTMS was headache, which occurred in about one-third of the dTMS group and in a similar proportion of controls early in the study but became a nonissue later.

“I have to say, we recruited 99 patients for the multicenter study, but only 2 of them dropped out because of side effects,” Dr. Carmi noted.

He reported having no financial conflicts of interest regarding the study, sponsored by Brainsway, which markets the dTMS device for the FDA-cleared indications of treatment-resistant depression and OCD.
