Cesarean deliveries drop in women at low risk
Although clinically indicated cesarean deliveries may improve outcomes for mothers and infants, “when not clinically indicated, cesarean delivery is a major surgical intervention that increases risk for adverse outcomes,” wrote Anna M. Frappaolo of Columbia University College of Physicians and Surgeons, New York, and colleagues.
The Healthy People 2030 campaign includes the reduction of cesarean deliveries, but trends in these procedures, especially with regard to diagnoses of labor arrest, have not been well studied, the researchers said.
In an analysis published in JAMA Network Open, the researchers reviewed delivery hospitalizations using data from the National Inpatient Sample from 2000 to 2019.
Births at low risk for cesarean delivery were identified using Society for Maternal-Fetal Medicine criteria supplemented with additional criteria, and joinpoint regression analysis was used to estimate changes in trends.
The researchers examined overall trends in cesarean deliveries as well as trends for three specific diagnoses: nonreassuring fetal status, labor arrest, and obstructed labor.
The final analysis included 40,517,867 deliveries; of these, 4,885,716 (12.1%) were cesarean deliveries.
Overall, cesarean deliveries in patients deemed at low risk increased from 9.7% in 2000 to 13.9% in 2009, then plateaued and decreased from 13.0% in 2012 to 11.1% in 2019. The average annual percentage change (AAPC) for cesarean delivery was 6.4% for the years from 2000 to 2005, 1.2% from 2005 to 2009, and −2.2% from 2009 to 2019.
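For context, the AAPC summarizing a joinpoint analysis is conventionally defined (in the National Cancer Institute's joinpoint methodology; the study's exact computation is not detailed here) as a geometric average of the segment-specific annual percent changes (APCs), weighted by segment length, so that over a single segment it reduces to that segment's APC:

\mathrm{AAPC} = \left\{ \left[ \prod_i \left( 1 + \frac{\mathrm{APC}_i}{100} \right)^{w_i} \right]^{1/\sum_i w_i} - 1 \right\} \times 100

where APC_i is the annual percent change in segment i and w_i is the number of years that segment spans.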
Cesarean delivery for nonreassuring fetal status increased over the entire study period, from 3.4% in 2000 to 5.1% in 2019. By contrast, overall cesarean delivery for labor arrest increased from 3.6% in 2000 to a high of 4.8% in 2009, then decreased to 2.7% in 2019. Cesarean deliveries with a diagnosis of obstructed labor decreased from 0.9% in 2008 to 0.3% in 2019.
More specifically, cesarean deliveries for labor arrest in the active phase, latent phase, and second stage of labor increased from 1.5% to 2.1%, from 1.1% to 1.5%, and from 0.9% to 1.3%, respectively, from 2000 to 2009. Between 2010 and 2019, they decreased from 2.1% to 1.7% for the active phase, from 1.5% to 1.2% for the latent phase, and from 1.2% to 0.9% for the second stage.
Odds of cesarean delivery were higher among older patients (aged 35-39 years vs. 25-29 years; adjusted odds ratio, 1.27), those who delivered in hospitals in the South vs. the Northeast of the United States (aOR, 1.11), and non-Hispanic Black vs. non-Hispanic White patients (OR, 1.23).
Notably, changes in nomenclature and interpretation of intrapartum electronic fetal heart monitoring occurred during the study period, with recommendations for the adoption of a three-tiered system for fetal heart rate patterns in 2008. “It is possible that current evidence and nomenclature related to intrapartum FHR interpretation may result in identification of a larger number of fetuses deemed at indeterminate risk for abnormal acid-base status,” the researchers wrote in their discussion.
The study findings were limited by several factors including the use of administrative discharge data rather than clinical records, the exclusion of patients with chronic conditions associated with cesarean delivery, changes in billing codes during the study period, and the inability to account for the effect of health factors, maternal age, and use of assisted reproductive technology, the researchers noted.
However, the results were strengthened by the large sample size and 20-year study period, as well as the stratification of labor arrest by stage, and suggest uptake of newer recommendations, they said. “Future reductions in cesarean deliveries among patients at low risk for cesarean delivery may be dependent on improved assessment of intrapartum fetal status,” they concluded.
Consider populations and outcomes in cesarean risk assessment
The decreasing rates of cesarean deliveries in the current study can be seen as positive, but more research is needed to examine maternal and neonatal outcomes, and to consider other conditions that affect risk for cesarean delivery, Paolo Ivo Cavoretto, MD, and Massimo Candiani, MD, of IRCCS San Raffaele Scientific Institute, and Antonio Farina, MD, of the University of Bologna, Italy, wrote in an accompanying editorial.
Notably, the study authors identified a population aged 15-39 years as low risk, yet within this range the risk for cesarean delivery increased with age. “Maternal age remains a major risk factor associated with the risk of cesarean delivery, both from results of this study and those of previous analyses assessing its independence from other related risk factors,” the editorialists said.
The study findings also reflect the changes in standards for labor duration during the study period, they noted. The longer duration of labor may reduce cesarean delivery rates, but it is not without maternal and fetal-neonatal risks, they wrote.
“To be sure that the described trend of cesarean delivery rate reduction can be considered positive, there would be the theoretical need to analyze other maternal-fetal-neonatal outcomes (e.g., rates of operative deliveries, neonatal acidemia, intensive care unit use, maternal hemorrhage, pelvic floor trauma and dysfunction, and psychological distress),” the editorialists concluded.
More research needed to explore clinical decisions
“Reducing the cesarean delivery rate is a top priority, but evidence is lacking on an optimal rate that improves maternal and neonatal outcomes,” Iris Krishna, MD, a maternal-fetal medicine specialist at Emory University, Atlanta, said in an interview.
“Hospital quality and safety committees have been working to decrease cesarean deliveries amongst low-risk women, and identifying contemporary trends gives us insight on whether some of these efforts have translated to a lower cesarean delivery rate,” she said.
Dr. Krishna said she was not surprised by the higher cesarean section rate in the South. “The decision for cesarean delivery is multifaceted, and although this study was not able to assess clinical indications for cesarean delivery or maternal and fetal outcomes, we cannot ignore that social determinants of health contribute greatly to overall health outcomes,” she said. The trends in the current study further underscore the geographic disparities in access to health care present in the South, she added.
“This study notes that cesarean delivery for nonreassuring fetal status increased; however, nonreassuring fetal status as an indication for cesarean delivery can be subjective,” Dr. Krishna said. “Hospital quality and safety committees should consider reviewing the clinical scenarios that led to this decision to identify opportunities for improvement and further education,” she said.
“Defining contemporary trends in cesarean delivery for low-risk patients has merit, but the study findings should be interpreted with caution,” said Dr. Krishna, who is a member of the Ob.Gyn. News advisory board. More research is needed to define an optimal cesarean section rate that promotes positive maternal and fetal outcomes, and to determine whether identifying an optimal rate should be based on patient risk profiles, she said.
The study received no outside funding. Lead author Ms. Frappaolo had no financial conflicts to disclose; nor did the editorial authors or Dr. Krishna.
FROM JAMA NETWORK OPEN
AHA, ACC push supervised exercise training for HFpEF
A statement released by the American Heart Association and the American College of Cardiology advocates use of supervised exercise training in patients with heart failure with preserved ejection fraction (HFpEF), as well as coverage for these services by third-party payers.
The authors hope to boost the stature of supervised exercise training (SET) in HFpEF among practitioners and show Medicare and insurers that it deserves reimbursement. Currently, they noted, clinicians tend to recognize exercise as therapy more in HF with reduced ejection fraction (HFrEF). And Medicare covers exercise training within broader cardiac rehabilitation programs for patients with HFrEF but not HFpEF.
Yet exercise has been broadly effective in HFpEF clinical trials, as outlined in the document. And there are good mechanistic reasons to believe that patients with the disorder can gain as much from SET as those with HFrEF, or even more.
“The signals for improvement from exercise training, in symptoms and objective measures of exercise capacity, are considerably larger for HFpEF than for HFrEF,” Dalane W. Kitzman, MD, Wake Forest University, Winston-Salem, N.C., said in an interview.
So, it’s a bit of a paradox that clinicians don’t prescribe it as often in HFpEF, probably because of the lack of reimbursement but also because of less “awareness” and understanding of the disease itself, he proposed.
Dr. Kitzman is senior author on the statement sponsored by the AHA and the ACC. It was published in the societies’ flagship journals Circulation and the Journal of the American College of Cardiology. The statement was also endorsed by the Heart Failure Society of America, the American Association of Cardiovascular and Pulmonary Rehabilitation, and the American Association of Heart Failure Nurses.
Carefully chosen words
The statement makes its case in HFpEF specifically for SET rather than cardiac rehabilitation, the latter typically a comprehensive program that goes beyond exercise, Dr. Kitzman noted. And SET is closer to the exercise interventions used in the supportive HFpEF trials.
“Also, Medicare in recent years has approved something called ‘supervised exercise training’ for other disorders, such as peripheral artery disease.” So, the document specifies SET “to be fully aligned with the evidence base,” he said, as well as “align it with a type of treatment that Medicare has a precedent for approving for other disorders.”
Data and physiologic basis
Core features of the AHA/ACC statement are its review of HFpEF exercise physiology, its survey of randomized trials supporting SET in the disease, and its characterization of exercise as an especially suitable pleiotropic therapy.
Increasingly, “HFpEF is now accepted as a systemic disorder that affects and impacts all organs,” Dr. Kitzman observed. “With a systemic multiorgan disorder, it would make sense that a broad treatment like exercise might be just the right thing. We think that’s the reason that its benefits are really quite large in magnitude.”
The document notes that exercise seems “potentially well suited for the treatment of both the cardiac and, in particular, the extracardiac abnormalities that contribute to exercise intolerance in HFpEF.”
Its effects in the disorder are “anti-inflammatory, rheological, lipid lowering, antihypertensive, positive inotropic, positive lusitropic, negative chronotropic, vasodilation, diuretic, weight-reducing, hypoglycemic, hypnotic, and antidepressive,” the statement notes. It achieves them via multiple pathways involving the heart, lungs, vasculature and, notably, the skeletal muscles.
“It’s been widely overlooked that at least 50% of low exercise capacity and symptoms in HFpEF are due to skeletal muscle dysfunction,” said Dr. Kitzman, an authority on exercise physiology in heart failure.
“But we’ve spent about 95% of our attention trying to modify and understand the cardiac component.” Skeletal muscles, he said, “are not an innocent bystander. They’re part of the problem. And that’s why we should really spend more time focusing on them.”
Dr. Kitzman disclosed receiving consulting fees from Bayer, Medtronic, Corvia Medical, Boehringer Ingelheim, Keyto, Rivus, Novo Nordisk, AstraZeneca, and Pfizer; holding stock in Gilead; and receiving grants to his institution from Bayer, Novo Nordisk, AstraZeneca, Rivus, and Pfizer.
A version of this article first appeared on Medscape.com.
Music at bedtime may aid depression-related insomnia
PARIS – The Music to Improve Sleep Quality in Adults With Depression and Insomnia (MUSTAFI) trial randomly assigned more than 110 outpatients with depression to either a music intervention or a waiting list. Sleep quality and quality of life significantly improved after listening to music for half an hour at bedtime for 4 weeks.
“This is a low-cost, safe intervention that has no side effects and may easily be implemented in psychiatry” along with existing treatments, lead researcher Helle Nystrup Lund, PhD, unit for depression, Aalborg (Denmark) University Hospital, said in an interview.
The findings were presented at the European Psychiatric Association 2023 Congress, and recently published in the Nordic Journal of Psychiatry.
Difficult to resolve
The researchers noted that insomnia is common in patients with depression and is “difficult to resolve.”
They noted that, while music is commonly used as a sleep aid and a growing evidence base suggests it has positive effects, there have been few investigations into the effectiveness of music for patients with depression-related insomnia.
To fill this research gap, 112 outpatients with depression and comorbid insomnia who were receiving care at a single center were randomly assigned to either an intervention group or a wait list control group.
Participants in the intervention group listened to music for a minimum of 30 minutes at bedtime for 4 weeks. The music was delivered via the MusicStar app, which is available as a free download from the Apple and Android (Google Play) app stores. The app was developed by Dr. Lund and Lars Rye Bertelsen, a PhD student and music therapist at Aalborg University Hospital.
The app is designed as a multicolored star, with each arm of the star linking to a playlist lasting between 30 minutes and 1 hour. Each color of the star indicates a different tempo of music.
Blue playlists, Dr. Lund explained, offer the quietest music, green is more lively, and red is the most dynamic. Gray playlists link to project-related soundtracks, such as summer rain.
Dr. Lund said organizing the playlists by stimuli and color code, instead of genre, allows users to regulate their level of arousal and makes the music choice intuitive and easy.
She said that the genres of music include New Age, folk, pop, classical, and film soundtracks, “but no hard rock.”
“There’s actually a quite large selection of music available, because studies show that individual choice is important, as are personal preferences,” she said, adding that the endless choices offered by streaming services can cause confusion.
“So we made curated playlists and designed them with well-known pieces, but also with newly composed music not associated with anything,” Dr. Lund said.
Participants were assessed using the Pittsburgh Sleep Quality Index (PSQI), the 17-item Hamilton Depression Rating Scale (HAMD-17), and two World Health Organization well-being questionnaires (WHO-5, WHOQOL-BREF), as well as actigraphy.
Results showed that, at 4 weeks, participants in the intervention group experienced significant improvements in sleep quality in comparison with control persons. The effect size for the PSQI was –2.1, and for quality of life on the WHO-5, the effect size was 8.4.
A subanalysis revealed that the length of nocturnal sleep in the intervention group increased by an average of 18 minutes during the study from a baseline of approximately 5 hours per night, said Dr. Lund.
However, there were no changes in actigraphy measurements and no significant improvements in HAMD-17 scores.
Dr. Lund said that, on the basis of these positive findings, music intervention as a sleep aid is now offered at Aalborg University Hospital to patients with depression-related insomnia.
Clinically meaningful?
Commenting on the findings, Gerald J. Haeffel, PhD, department of psychology, University of Notre Dame, South Bend, Ind., said that overall, the study showed there was a change in sleep quality and quality of life scores of “about 10% in each.”
“This, on the surface, would seem to be a meaningful change,” although it is less clear whether it is “clinically meaningful.” Perhaps it is, “but it would be nice to have more information.”
It would be useful, he added, to “show the means for each group pre- to postintervention, along with standard deviations.”
Dr. Haeffel added that on the basis of current results, it isn’t possible to determine whether individuals’ control over music choice is important.
“We have no idea if ‘choice’ or length of playlist had any causal role in the results. One would need to run a study with the same playlist, but in one group people have to listen to whatever song comes on versus another condition in which they get to choose a song off the same list,” he said.
He noted that his group conducted a study in which highly popular music that was chosen by individual participants was found to have a positive effect. Even so, he said, “we could not determine if it was ‘choice’ or ‘popularity’ that caused the positive effects of music.”
In addition, he said, the reason music has a positive effect on insomnia remains unclear.
“It is not because it helped with depression, and it’s not because it’s actually changing objective sleep parameters. It could be that it improves mood right before bed or helps distract people right before bed. At the same time, it could also just be a placebo effect,” said Dr. Haeffel.
In addition, he said, it’s important to note that the music intervention had no comparator, so “maybe just doing something different or getting to talk with researchers created the effect and has nothing to do with music.”
Overall, he believes that there are “not enough data” to use the sleep intervention that was employed in the current study “as primary intervention, but future work could show its usefulness as a supplement.”
Dr. Lund and Mr. Bertelsen reported ownership and sales of the MusicStar app. Dr. Haeffel reported no relevant financial relationships.
AT EPA 2023
Cancer risk elevated after stroke in younger people
In young people, stroke might be the first manifestation of an underlying cancer, according to the investigators, led by Jamie Verhoeven, MD, PhD, with the department of neurology, Radboud University Medical Centre, Nijmegen, the Netherlands.
The new study can be viewed as a “stepping stone for future studies investigating the usefulness of screening for cancer after stroke,” the researchers say.
The study was published online in JAMA Network Open.
Currently, the diagnostic workup for young people with stroke includes searching for rare clotting disorders, although screening for cancer is not regularly performed.
Some research suggests that stroke and cancer are linked, but the literature is limited. In prior studies among people of all ages, cancer incidence after stroke has been variable – from 1% to 5% at 1 year and from 11% to 30% after 10 years.
To the team’s knowledge, only two studies have described the incidence of cancer after stroke among younger patients. One put the risk at 0.5% for people aged 18-50 years in the first year after stroke; the other described a cumulative risk of 17.3% in the 10 years after stroke for patients aged 18-55 years.
Using Dutch data, Dr. Verhoeven and colleagues identified 27,616 young stroke patients (age, 15-49 years; median age, 45 years) and 362,782 older stroke patients (median age, 76 years).
The cumulative incidence of any new cancer at 10 years was 3.7% among the younger stroke patients and 8.5% among the older stroke patients.
The incidence of a new cancer after stroke among younger patients was higher among women than men, while the opposite was true for older stroke patients.
Compared with the general population, younger stroke patients had a more than 2.5-fold greater likelihood of being diagnosed with a new cancer in the first year after ischemic stroke (standardized incidence ratio, 2.6). The risk was highest for lung cancer (SIR, 6.9), followed by hematologic cancers (SIR, 5.2).
Compared with the general population, younger stroke patients had nearly a 5.5-fold greater likelihood of being diagnosed with a new cancer in the first year after intracerebral hemorrhage (SIR, 5.4), and the risk was highest for hematologic cancers (SIR, 14.2).
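As a brief note on the metric (standard epidemiologic usage rather than notation taken from the paper), a standardized incidence ratio compares the number of cancers observed in the stroke cohort with the number expected if general-population incidence rates applied:

\mathrm{SIR} = \frac{O}{E}, \qquad E = \sum_j n_j \lambda_j

where O is the observed case count, n_j is the person-years of follow-up in stratum j (typically defined by age, sex, and calendar period), and \lambda_j is the corresponding population incidence rate. An SIR of 2.6, for example, indicates 2.6 times as many new cancers as would be expected in a comparable slice of the general population.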
In younger patients, the excess risk of any new cancer attenuated over time but remained significantly elevated for 8 years following a stroke.
For patients aged 50 years or older, the 1-year risk for any new cancer after either ischemic stroke or intracerebral hemorrhage was 1.2 times that of the general population.
“We typically think of occult cancer as being a cause of stroke in an older population, given that the incidence of cancer increases over time [but] what this study shows is that we probably do need to consider occult cancer as an underlying cause of stroke even in a younger population,” said Laura Gioia, MD, stroke neurologist at the University of Montreal, who was not involved in the research.
Dr. Verhoeven and colleagues conclude that their finding supports the hypothesis of a causal link between cancer and stroke. Given the timing between stroke and cancer diagnosis, cancer may have been present when the stroke occurred and possibly played a role in causing it, the authors note. However, conclusions on causal mechanisms cannot be drawn from the current study.
The question of whether young stroke patients should be screened for cancer is a tough one, Dr. Gioia noted. “Cancer represents a small percentage of causes of stroke. That means you would have to screen a lot of people with a benefit that is still uncertain for the moment,” Dr. Gioia said in an interview.
“I think we need to keep cancer in mind as a cause of stroke in our young patients, and that should probably guide our history-taking with the patient and consider imaging when it’s appropriate and when we think that there could be an underlying occult cancer,” Dr. Gioia suggested.
The study was funded in part through unrestricted funding by Stryker, Medtronic, and Cerenovus. Dr. Verhoeven and Dr. Gioia have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
SGLT2 inhibitors: Real-world data show benefits outweigh risks
Starting therapy with an SGLT2 inhibitor versus a glucagon-like peptide-1 (GLP-1) receptor agonist was associated with more lower limb amputations, nonvertebral fractures, and genital infections, but these risks need to be balanced against cardiovascular and renoprotective benefits, according to the researchers.
The analysis showed that there would be 2.1 more lower limb amputations, 2.5 more nonvertebral fractures, and 41 more genital infections per 1,000 patients per year among those receiving SGLT2 inhibitors versus an equal number of patients receiving GLP-1 agonists, lead author Edouard Fu, PhD, explained to this news organization in an email.
“On the other hand, we know from the evidence from randomized controlled trials that taking an SGLT2 inhibitor compared with placebo lowers the risk of developing kidney failure,” said Dr. Fu, who is a research fellow in the division of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital, Boston.
“For instance,” he continued, “in the DAPA-CKD clinical trial, dapagliflozin versus placebo led to 29 fewer events per 1,000 patients per year of the composite outcome (50% decline in estimated glomerular filtration rate [eGFR], kidney failure, cardiovascular or kidney death).”
In the CREDENCE trial, canagliflozin versus placebo led to 18 fewer events per 1,000 person-years for the composite outcome of doubling of serum creatinine, kidney failure, and cardiovascular or kidney death.
And in the EMPA-KIDNEY study, empagliflozin versus placebo led to 21 fewer events per 1,000 person-years for the composite outcome of progression of kidney disease or cardiovascular death.
“Thus, benefits would still outweigh the risks,” Dr. Fu emphasized.
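To make the comparison concrete, the raw arithmetic on the absolute rates quoted above can be tallied as below, with two important caveats: the excess harms were measured against GLP-1 agonists while the trial benefits were measured against placebo, and the events differ greatly in severity, so this is an illustration of the numbers rather than a clinical risk-benefit analysis.

```python
# Naive tally of the absolute rates quoted above, all per 1,000 patients
# per year. Caveats: harms are vs. GLP-1 agonists, avoided events are
# vs. placebo, and the events differ greatly in severity.
excess_harms = {
    "lower limb amputations": 2.1,
    "nonvertebral fractures": 2.5,
    "genital infections": 41.0,
}
avoided_composite_events = {"DAPA-CKD": 29, "CREDENCE": 18, "EMPA-KIDNEY": 21}

print(f"Total excess harms: {sum(excess_harms.values()):.1f} per 1,000 patient-years")
for trial, n in avoided_composite_events.items():
    print(f"{trial}: {n} fewer composite events per 1,000 patient-years")
```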
‘Quantifies absolute rate of events among routine care patients’
“The importance of our paper,” he summarized, “is that it quantifies the absolute rate of events among routine care patients and may be used to inform shared decision-making.”
The analysis also found that the risks of diabetic ketoacidosis (DKA), hypovolemia, hypoglycemia, and severe urinary tract infection (UTI) were similar with SGLT2 inhibitors versus GLP-1 agonists, but the risk of developing acute kidney injury (AKI) was lower with an SGLT2 inhibitor.
“Our study can help inform patient-physician decision-making regarding risks and benefits before prescribing SGLT2 inhibitors in this population” of patients with chronic kidney disease (CKD) and diabetes treated in clinical practice, the researchers conclude, “but needs to be interpreted in light of its limitations, including residual confounding, short follow-up time, and the use of diagnosis codes to identify patients with CKD.”
The study was recently published in the Clinical Journal of the American Society of Nephrology.
Slow uptake, safety concerns
SGLT2 inhibitors are recommended as first-line therapy in patients with type 2 diabetes and CKD who have an eGFR equal to or greater than 20 mL/min per 1.73 m², and thus are at high risk for cardiovascular disease and kidney disease progression, Dr. Fu and colleagues write.
However, studies report that as few as 6% of patients with CKD and type 2 diabetes are currently prescribed SGLT2 inhibitors in the United States.
This slow uptake of SGLT2 inhibitors among patients with CKD may be partly due to concerns about DKA, fractures, amputations, and urogenital infections observed in clinical trials.
However, such trials are generally underpowered to assess rare adverse events, use monitoring protocols to lower the risk of adverse events, and include a highly selected patient population, and so safety in routine clinical practice is often unclear.
To examine this, the researchers identified health insurance claims data from 96,128 individuals (from Optum, IBM MarketScan, and Medicare databases) who were 18 years or older (65 years or older for Medicare) and had type 2 diabetes and at least one inpatient or two outpatient diagnostic codes for stage 3 or 4 CKD.
Of these patients, 32,192 had a newly filled prescription for an SGLT2 inhibitor (empagliflozin, dapagliflozin, canagliflozin, or ertugliflozin) and 63,936 had a newly filled prescription for a GLP-1 agonist (liraglutide, dulaglutide, semaglutide, exenatide, albiglutide, or lixisenatide) between April 2013, when the first SGLT2 inhibitor was available in the United States, and 2021.
The researchers matched 28,847 individuals who were initiated on an SGLT2 inhibitor with an equal number who were initiated on a GLP-1 agonist, based on propensity scores, adjusting for more than 120 baseline characteristics.
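Propensity-score matching of this kind first estimates each patient's probability of starting an SGLT2 inhibitor from baseline characteristics, then pairs each SGLT2 initiator with the GLP-1 initiator whose score is closest. A minimal sketch on synthetic data follows; the study's actual algorithm, covariates, and any matching caliper are not described here.

```python
# Hedged sketch of 1:1 nearest-neighbor propensity-score matching on
# synthetic data. Variable names and the greedy algorithm are
# illustrative; the study's own implementation is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))             # baseline covariates (synthetic)
treated = rng.integers(0, 2, size=n)    # 1 = SGLT2 initiator, 0 = GLP-1

# Step 1: model the probability of treatment given covariates
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedily pair each treated patient with the closest control,
# matching without replacement
controls = list(np.where(treated == 0)[0])
pairs = []
for i in np.where(treated == 1)[0]:
    if not controls:
        break
    j = min(controls, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)
print(f"Matched {len(pairs)} treated/control pairs")
```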
Safety outcomes were based on previously identified potential safety signals.
Patients who were initiated on an SGLT2 inhibitor had 1.30-fold, 2.13-fold, and 3.08-fold higher risks of nonvertebral fracture, lower limb amputation, and genital infection, respectively, compared with patients who were initiated on a GLP-1 agonist, after a mean on-treatment time of 7.5 months.
Risks of DKA, hypovolemia, hypoglycemia, and severe UTI were similar in both groups.
Patients initiated on an SGLT2 inhibitor versus a GLP-1 agonist had a lower risk of AKI (hazard ratio, 0.93) equivalent to 6.75 fewer cases of AKI per 1,000 patients per year.
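The relative and absolute figures can be reconciled with a line of arithmetic: given a comparator event rate R per 1,000 patient-years, a hazard ratio HR implies an absolute difference of roughly R × (1 − HR). The comparator rate implied below is an inference from the two reported numbers, not a figure from the paper.

```python
# Rough consistency check: absolute rate difference ~= R * (1 - HR),
# where R is the comparator (GLP-1) event rate. The implied rate is an
# inference from the reported numbers, not a figure from the paper.
hr = 0.93                 # reported hazard ratio for AKI
rate_difference = 6.75    # reported fewer AKI cases per 1,000 patient-years
implied_rate = rate_difference / (1 - hr)
print(f"Implied comparator AKI rate: ~{implied_rate:.0f} per 1,000 patient-years")
```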
Patients had higher risks for lower limb amputation, genital infections, and nonvertebral fractures with SGLT2 inhibitors versus GLP-1 agonists across most of the prespecified subgroups by age, sex, cardiovascular disease, heart failure, and use of metformin, insulin, or sulfonylurea, but with wider confidence intervals.
Dr. Fu was supported by a Rubicon grant from the Dutch Research Council and has reported no relevant financial relationships. Disclosures for the other authors are listed with the article.
A version of this article originally appeared on Medscape.com.
When practice-changing results don’t change practice
The highly favorable results of the CheckMate 816 trial of neoadjuvant chemotherapy plus nivolumab for resectable stage IB-IIIA non–small cell lung cancer (NSCLC) were impressive enough to prompt a Food and Drug Administration approval of this combination in March 2022.
For many, this led to a marked shift in how we approached these patients. But in my conversations with many care teams, they have expressed ambivalence about using the chemoimmunotherapy regimen. Some have conveyed to me that the lack of statistically significant improvement in overall survival is a sticking point. Others have expressed uncertainty about the true benefit of neoadjuvant chemotherapy alongside nivolumab for patients with earlier-stage disease, given that 64% of patients in the trial had stage IIIA disease. The benefit of the neoadjuvant combination in patients with low or negative tumor programmed death–ligand 1 (PD-L1) expression also remains a question mark, though the trial found no significant differences in outcomes by PD-L1 subset.
But among many of my colleagues who favor adjuvant over neoadjuvant therapy, it isn’t necessarily the fine points of the data that present the real barrier: it’s the sentiment that “we just don’t favor a neoadjuvant approach at my place.”
If the worry is that a subset of patients who are eligible for up-front surgery may be derailed from the operating room by significant disease progression or a complication during preoperative therapy, or that surgery will be more difficult after chemoimmunotherapy, those concerns are not supported by evidence. In fact, data on surgical outcomes from CheckMate 816 assessing these issues found that surgery after chemoimmunotherapy was approximately 30 minutes faster than after chemotherapy alone. In addition, neoadjuvant chemoimmunotherapy was associated with less extensive surgeries, particularly for patients with stage IIIA NSCLC, and patients reported measurably less pain and dyspnea as well.
Though postoperative systemic therapy has been our general approach for resectable NSCLC for nearly 2 decades, there are several reasons to focus on neoadjuvant therapy.
First, immunotherapy may work more effectively when the tumor antigens, lymph nodes, and lymphatic system are still present in situ.
Second, patients may be eager to complete their treatment within a 3-month period of just three cycles of systemic therapy followed by surgery rather than receiving their treatment over a prolonged chapter of their lives, starting with surgery followed by four cycles of chemotherapy and 1 year of immunotherapy.
Finally, we can’t ignore the fact that most neoadjuvant therapy is delivered exactly as intended, whereas planned adjuvant therapy is often not started and rarely completed as designed. At most, only about half of the patients appropriate for adjuvant chemotherapy even start it, and far fewer complete a full four cycles or go on to complete prolonged adjuvant immunotherapy.
We also shouldn’t underestimate the value of imaging and pathology findings after patients have completed neoadjuvant therapy. In CheckMate 816, pathologic complete response was predictive of improved event-free survival over time.
And that isn’t just a binary variable of achieving a pathologic complete response or not. The degree of residual, viable tumor after surgery is a continuous variable associated along a spectrum with event-free survival. Our colleagues who treat breast cancer have been able to customize postoperative therapy to improve outcomes on the basis of the results achieved with neoadjuvant therapy. Multidisciplinary gastrointestinal oncology teams have revolutionized outcomes with rectal cancer by transitioning to total neoadjuvant therapy that makes it possible to deliver treatment more reliably and pursue organ-sparing approaches while achieving better survival.
Putting all of this together, I appreciate arguments against the generalizability or the maturity of the data supporting neoadjuvant chemoimmunotherapy for resectable NSCLC. However, sidestepping our most promising advances will harm our patients. Plus, what’s the point of generating practice-changing results if we don’t accept and implement them?
We owe it to our patients to follow the evolving evidence and not just stick to what we’ve always done.
Dr. West is an associate professor at City of Hope Comprehensive Cancer Center in Duarte, Calif., and vice president of network strategy at AccessHope in Los Angeles. Dr. West serves as web editor for JAMA Oncology, edits and writes several sections on lung cancer for UpToDate, and leads a wide range of continuing medical education and other educational programs.
A version of this article first appeared on Medscape.com.
Heart rate, cardiac phase influence perception of time
People’s perception of time is subjective and based not only on their emotional state but also on heartbeat and heart rate (HR), two new studies suggest.
Researchers studied young adults using electrocardiography (ECG) to measure the heart’s electrical activity at millisecond resolution while participants listened to tones that varied in duration. Participants were asked to report whether certain tones were longer or shorter relative to others.
The researchers found that the momentary perception of time was not continuous but rather expanded or contracted with each heartbeat. When the heartbeat preceding a tone was shorter, participants regarded the tone as longer in duration; but when the preceding heartbeat was longer, the participants experienced the tone as shorter.
“Our findings suggest that there is a unique role that cardiac dynamics play in the momentary experience of time,” lead author Saeedah Sadeghi, MSc, a doctoral candidate in the department of psychology at Cornell University, Ithaca, N.Y., said in an interview.
The study was published online in Psychophysiology.
In a second study, published in the journal Current Biology, a separate team of researchers asked participants to judge whether a brief event – the presentation of a tone or an image – was shorter or longer than a reference duration. ECG was used to track systole and diastole when participants were presented with these events.
The researchers found that the durations were underestimated during systole and overestimated during diastole, suggesting that time seemed to “speed up” or “slow down,” based on cardiac contraction and relaxation. When participants rated the events as more arousing, their perceived durations contracted, even during diastole.
“In our new paper, we show that our heart shapes the perceived duration of events, so time passes quicker when the heart contracts but slower when the heart relaxes,” lead author Irena Arslanova, PhD, postdoctoral researcher in cognitive neuroscience, Royal Holloway University of London, told this news organization.
Temporal ‘wrinkles’
“Subjective time is malleable,” observed Ms. Sadeghi and colleagues in their report. “Rather than being a uniform dimension, perceived duration has ‘wrinkles,’ with certain intervals appearing to dilate or contract relative to objective time” – a phenomenon sometimes referred to as “distortion.”
“We have known that people aren’t always consistent in how they perceive time, and objective duration doesn’t always explain subjective perception of time,” Ms. Sadeghi said.
Although a potential role for the heart in the experience of time has been hypothesized, research into the heart-time connection has been limited; previous studies focused primarily on average cardiac measures over longer time scales of seconds to minutes.
The current study sought to investigate “the beat-by-beat fluctuations of the heart period on the experience of brief moments in time” because, compared with longer time scales, subsecond temporal perception “has different underlying mechanisms” and a subsecond stimulus can be a “small fraction of a heartbeat.”
To home in on this small fraction, the researchers studied 45 participants (aged 18-21), who listened to 210 tones ranging in duration from 80 ms (short) to 188 ms (long). The tones were linearly spaced at 18-ms increments (80, 98, 116, 134, 152, 170, 188).
Participants were asked to categorize each tone as “short” or “long.” All tones were randomly assigned to be synchronized either with the systolic or diastolic phase of the cardiac cycle (50% each). The tones were triggered by participants’ heartbeats.
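A short sketch of that stimulus design follows, assuming each of the seven durations was repeated equally often (210/7 = 30 tones apiece; the article does not state the per-duration counts):

```python
# Sketch of the stimulus set: seven tone durations linearly spaced at
# 18-ms steps from 80 to 188 ms, 210 tones in total, each randomly
# assigned to onset at systole or diastole (50/50). Equal repetition
# per duration (30 each) is an assumption.
import numpy as np

rng = np.random.default_rng(42)
durations_ms = np.arange(80, 189, 18)                 # 80, 98, ..., 188
tones = rng.permutation(np.repeat(durations_ms, 30))  # 210 trials
phases = rng.permutation(np.repeat(["systole", "diastole"], 105))
trials = list(zip(tones, phases))
print(len(trials), trials[:3])
```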
In addition, participants engaged in a heartbeat-counting activity, in which they were asked not to touch their pulse but to count their heartbeats by tuning in to their bodily sensations at intervals of 25, 35, and 45 seconds.
‘Classical’ response
“Participants exhibited an increased heart period after tone onset, which returned to baseline following an average canonical bell shape,” the authors reported.
The researchers performed regression analyses to determine how, on average, the heart rate before the tone and the amount of change after the tone were related to perceived duration.
They found that when the heart rate was higher before the tone, participants tended to be more accurate in their time perception. When the heartbeat preceding a tone was shorter, participants experienced the tone as longer; conversely, when the heartbeat was longer, they experienced the duration of the identical sound as shorter.
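One way to picture the trial-level analysis is as a regression of the binary “long” judgment on tone duration and the length of the preceding heartbeat. The sketch below uses synthetic data; the simulated negative heart-period coefficient simply encodes the direction the study reports (shorter preceding heartbeat, tone judged longer), not its estimated effect size.

```python
# Hedged sketch of a trial-level logistic regression in the spirit of
# the analysis described above. Data are synthetic; coefficients are
# chosen only to reproduce the reported direction of the effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 210
tone_ms = rng.choice(np.arange(80, 189, 18), size=n)
pre_period_ms = rng.normal(800, 60, size=n)   # preceding heartbeat length

# Simulate: longer tones and shorter preceding heartbeats -> "long"
logit = 0.05 * (tone_ms - 134) - 0.01 * (pre_period_ms - 800)
judged_long = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([tone_ms, pre_period_ms]))
fit = sm.Logit(judged_long, X).fit(disp=0)
print(fit.params)  # expect a positive duration term, negative heart-period term
```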
When participants focused their attention on the sounds, their orienting responses changed their heart rate and, in turn, their temporal perception.
“The orienting response is classical,” Ms. Sadeghi said. “When you attend to something unpredictable or novel, the act of orienting attention decreases the HR.”
She explained that the heartbeats are “noise to the brain.” When people need to perceive external events, “a decrease in HR facilitates the intake of things from outside and facilitates sensory intake.”
A lower HR “makes it easier for the person to take in the tone and perceive it, so it feels as though they perceive more of the tone and the duration seems longer.”
It is unknown whether this is a causal relationship, she cautioned, “but it seems as though the decrease in HR somehow makes it easier to ‘get’ more of the tone, which then appears to have longer duration.”
Bidirectional relationship
“We know that experienced time can be distorted,” said Dr. Arslanova. “Time flies by when we’re busy or having fun but drags on when we’re bored or waiting for something, yet we still don’t know how the brain gives rise to such elastic experience of time.”
The brain controls the heart in response to the information the heart provides about the state of the body, she noted, “but we have begun to see more research showing that the heart–brain relationship is bidirectional.”
This means that the heart plays a role in shaping “how we process information and experience emotions.” In this analysis, Dr. Arslanova and colleagues “wanted to study whether the heart also shapes the experience of time.”
To do so, they conducted two experiments.
In the first, participants (n = 28) were presented with brief events during systole or during diastole. The events took the form of an emotionally neutral visual shape or auditory tone, presented for durations of 200 to 400 ms.
Participants were asked whether these events were of longer or shorter duration, compared with a reference duration.
The researchers found a significant main effect of cardiac phase (F(1,27) = 8.1, P = .01), with stimuli presented at diastole regarded, on average, as 7 ms longer than those presented at systole.
They also found a significant main effect of modality (F(1,27) = 5.7, P = .02), with tones judged, on average, as 13 ms longer than visual stimuli.
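For readers curious about the structure of these tests, the two F(1,27) values correspond to a repeated-measures ANOVA over 28 participants with cardiac phase and stimulus modality as within-subject factors. Below is a sketch on synthetic data, with effects seeded to mimic the reported 7-ms and 13-ms differences; the real dependent measure and data are not reproduced.

```python
# Hedged sketch of a 2x2 repeated-measures ANOVA (phase x modality)
# over 28 subjects, mirroring the F(1,27) structure above. Synthetic
# data; effect sizes are seeded to mimic the reported 7-ms (phase) and
# 13-ms (modality) differences, not taken from the study.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = []
for subject in range(28):
    for phase in ("systole", "diastole"):
        for modality in ("visual", "tone"):
            perceived = (300
                         + (7 if phase == "diastole" else 0)
                         + (13 if modality == "tone" else 0)
                         + rng.normal(0, 10))
            rows.append((subject, phase, modality, perceived))
df = pd.DataFrame(rows, columns=["subject", "phase", "modality", "perceived_ms"])

print(AnovaRM(df, "perceived_ms", "subject",
              within=["phase", "modality"]).fit())
```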
“This means that time ‘sped up’ during the heart’s contraction and ‘slowed down’ during the heart’s relaxation,” Dr. Arslanova said.
The effect of cardiac phase on duration perception was independent of changes in HR, the authors noted.
In the second experiment, participants performed a similar task, but this time it involved images of faces bearing emotional expressions. The researchers again observed time appearing to speed up during systole and slow down during diastole, with stimuli presented at diastole regarded as being, on average, 9 ms longer than those presented at systole.
These opposing effects of systole and diastole on time perception were present only for low and average arousal ratings (b = 14.4 [SE 3.2], P < .001 and b = 9.2 [SE 2.3], P < .001, respectively). However, the effect disappeared when arousal ratings increased (b = 4.1 [SE 3.2], P = .21).
“Interestingly, when participants rated the events as more arousing, their perceived durations contracted, even during the heart’s relaxation,” Dr. Arslanova observed. “This means that in a nonaroused state, the two cardiac phases pull the experienced duration in opposite directions – time contracts, then expands.”
The findings “also predict that increasing HR would speed up passing time, making events seem shorter, because there will be a stronger influence from the heart’s contractions,” she said.
She described the relationship between time perception and emotion as complex, noting that the findings are important because they show “that the way we experience time cannot be examined in isolation from our body.”
Converging evidence
Martin Wiener, PhD, assistant professor, George Mason University, Fairfax, Va., said both papers “provide converging evidence on the role of the heart in our perception of time.”
Together, “the results share that our sense of time – that is, our incoming sensory perception of the present ‘moment’ – is adjusted or ‘gated’ by both our HR and cardiac phase,” said Dr. Wiener, executive director of the Timing Research Forum.
The studies “provide a link between the body and the brain, in terms of our perception, and that we cannot study one without the context of the other,” said Dr. Wiener, who was not involved with the current study.
“All of this opens up a new avenue of research, and so it is very exciting to see,” Dr. Wiener stated.
No source of funding was listed for the study by Ms. Sadeghi and coauthors. They declared no relevant financial relationships.
Dr. Arslanova and coauthors declared no competing interests. Senior author Manos Tsakiris, PhD, receives funding from the European Research Council Consolidator Grant. Dr. Wiener declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
People’s perception of time is subjective and based not only on their emotional state but also on heartbeat and heart rate (HR), two new studies suggest.
Researchers studied young adults with an electrocardiogram (ECG), measuring electrical activity at millisecond resolution while participants listened to tones that varied in duration. Participants were asked to report whether certain tones were longer or shorter, in relation to others.
The researchers found that the momentary perception of time was not continuous but rather expanded or contracted with each heartbeat. When the heartbeat preceding a tone was shorter, participants regarded the tone as longer in duration; but when the preceding heartbeat was longer, the participants experienced the tone as shorter.
“Our findings suggest that there is a unique role that cardiac dynamics play in the momentary experience of time,” lead author Saeedah Sadeghi, MSc, a doctoral candidate in the department of psychology at Cornell University, Ithaca, N.Y., said in an interview.
The study was published online in Psychophysiology.
In a second study, published in the journal Current Biology, a separate team of researchers asked participants to judge whether a brief event – the presentation of a tone or an image – was shorter or longer than a reference duration. ECG was used to track systole and diastole when participants were presented with these events.
The researchers found that the durations were underestimated during systole and overestimated during diastole, suggesting that time seemed to “speed up” or “slow down,” based on cardiac contraction and relaxation. When participants rated the events as more arousing, their perceived durations contracted, even during diastole.
“In our new paper, we show that our heart shapes the perceived duration of events, so time passes quicker when the heart contracts but slower when the heart relaxes,” lead author Irena Arslanova, PhD, postdoctoral researcher in cognitive neuroscience, Royal Holloway University of London, told this news organization.
Temporal ‘wrinkles’
“Subjective time is malleable,” observed Ms. Sadeghi and colleagues in their report. “Rather than being a uniform dimension, perceived duration has ‘wrinkles,’ with certain intervals appearing to dilate or contract relative to objective time” – a phenomenon sometimes referred to as “distortion.”
“We have known that people aren’t always consistent in how they perceive time, and objective duration doesn’t always explain subjective perception of time,” Ms. Sadeghi said.
Although the potential role of the heart in the experience of time has been hypothesized, research into the heart-time connection has been limited, with previous studies focusing primarily on estimating the average cardiac measures on longer time scales over seconds to minutes.
The current study sought to investigate “the beat-by-beat fluctuations of the heart period on the experience of brief moments in time” because, compared with longer time scales, subsecond temporal perception “has different underlying mechanisms” and a subsecond stimulus can be a “small fraction of a heartbeat.”
To home in on this small fraction, the researchers studied 45 participants (aged 18-21), who listened to 210 tones ranging in duration from 80 ms (short) to 188 ms (long). The tones were linearly spaced at 18-ms increments (80, 98, 116, 134, 152, 170, 188).
Participants were asked to categorize each tone as “short” or “long.” All tones were randomly assigned to be synchronized either with the systolic or diastolic phase of the cardiac cycle (50% each). The tones were triggered by participants’ heartbeats.
In addition, participants engaged in a heartbeat-counting activity, in which they were asked not to touch their pulse but to count their heartbeats by tuning in to their bodily sensations at intervals of 25, 35, and 45 seconds.
‘Classical’ response
“Participants exhibited an increased heart period after tone onset, which returned to baseline following an average canonical bell shape,” the authors reported.
The researchers performed regression analyses to determine how, on average, the heart rate before the tone was related to perceived duration or how the amount of change after the tone was related to perceived duration.
They found that when the heart rate was higher before the tone, participants tended to be more accurate in their time perception. When the heartbeat preceding a tone was shorter, participants experienced the tone as longer; conversely, when the heartbeat was longer, they experienced the duration of the identical sound as shorter.
When participants focused their attention on the sounds, their heart rate was affected such that their orienting responses actually changed their heart rate and, in turn, their temporal perception.
“The orienting response is classical,” Ms. Sadeghi said. “When you attend to something unpredictable or novel, the act of orienting attention decreases the HR.”
She explained that the heartbeats are “noise to the brain.” When people need to perceive external events, “a decrease in HR facilitates the intake of things from outside and facilitates sensory intake.”
A lower HR “makes it easier for the person to take in the tone and perceive it, so it feels as though they perceive more of the tone and the duration seems longer – similarly, when the HR decreases.”
It is unknown whether this is a causal relationship, she cautioned, “but it seems as though the decrease in HR somehow makes it easier to ‘get’ more of the tone, which then appears to have longer duration.”
Bidirectional relationship
“We know that experienced time can be distorted,” said Dr. Arslanova. “Time flies by when we’re busy or having fun but drags on when we’re bored or waiting for something, yet we still don’t know how the brain gives rise to such elastic experience of time.”
The brain controls the heart in response to the information the heart provides about the state of the body, she noted, “but we have begun to see more research showing that the heart–brain relationship is bidirectional.”
This means that the heart plays a role in shaping “how we process information and experience emotions.” In this analysis, Dr. Arslanova and colleagues “wanted to study whether the heart also shapes the experience of time.”
To do so, they conducted two experiments.
In the first, participants (n = 28) were presented with brief events during systole or during diastole. The events took the form of an emotionally neutral visual shape or auditory tone, presented for 200 to 400 ms.
Participants were asked whether these events were of longer or shorter duration, compared with a reference duration.
The researchers found a significant main effect of cardiac phase (F(1,27) = 8.1, P = .01), with stimuli presented at diastole judged, on average, as 7 ms longer than those presented at systole.
They also found a significant main effect of modality (F(1,27) = 5.7, P = .02), with tones judged, on average, as 13 ms longer than visual stimuli.
“This means that time ‘sped up’ during the heart’s contraction and ‘slowed down’ during the heart’s relaxation,” Dr. Arslanova said.
The effect of cardiac phase on duration perception was independent of changes in HR, the authors noted.
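As a rough illustration of the statistics involved: with 28 participants, a main effect of cardiac phase reported as F(1,27) is equivalent to a squared paired t-test across participants. The sketch below runs that contrast on simulated per-subject means; the numbers are invented, not the study data.

```python
# Illustrative sketch on simulated data: a systole-vs.-diastole contrast
# across 28 participants. With one numerator degree of freedom, the
# repeated-measures F(1,27) equals the squared paired t(27).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 28
systole_ms = rng.normal(295, 12, n)               # per-subject mean judged duration
diastole_ms = systole_ms + rng.normal(7, 10, n)   # assume ~7 ms longer at diastole

t, p = stats.ttest_rel(diastole_ms, systole_ms)
print(f"t(27) = {t:.2f}, F(1,27) = {t**2:.2f}, P = {p:.3f}")
```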
In the second experiment, participants performed a similar task, but this time it involved images of faces with emotional expressions. The researchers again observed time appearing to speed up during systole and slow down during diastole, with stimuli presented at diastole judged as an average of 9 ms longer than those presented at systole.
These opposing effects of systole and diastole on time perception were present only for low and average arousal ratings (b = 14.4 [SE 3.2], P < .001 and b = 9.2 [SE 2.3], P < .001, respectively). However, this effect disappeared when arousal ratings increased (b = 4.1 [SE 3.2], P = .21).
“Interestingly, when participants rated the events as more arousing, their perceived durations contracted, even during the heart’s relaxation,” Dr. Arslanova observed. “This means that in a nonaroused state, the two cardiac phases pull the experienced duration in opposite directions – time contracts, then expands.”
The findings “also predict that increasing HR would speed up passing time, making events seem shorter, because there will be a stronger influence from the heart’s contractions,” she said.
She described the relationship between time perception and emotion as complex, noting that the findings are important because they show “that the way we experience time cannot be examined in isolation from our body.”
Converging evidence
Martin Wiener, PhD, assistant professor, George Mason University, Fairfax, Va., said both papers “provide converging evidence on the role of the heart in our perception of time.”
Together, “the results share that our sense of time – that is, our incoming sensory perception of the present ‘moment’ – is adjusted or ‘gated’ by both our HR and cardiac phase,” said Dr. Wiener, executive director of the Timing Research Forum.
The studies “provide a link between the body and the brain, in terms of our perception, and that we cannot study one without the context of the other,” said Dr. Wiener, who was not involved with the current study.
“All of this opens up a new avenue of research, and so it is very exciting to see,” Dr. Wiener stated.
No source of funding was listed for the study by Ms. Sadeghi and coauthors. They declared no relevant financial relationships.
Dr. Arslanova and coauthors declared no competing interests. Senior author Manos Tsakiris, PhD, receives funding from the European Research Council Consolidator Grant. Dr. Wiener declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM PSYCHOPHYSIOLOGY
Nasal COVID treatment shows early promise against multiple variants
A nasal treatment for COVID-19 shows early promise against multiple coronavirus variants if used within 4 hours after infection inside the nose, new research reveals.
Known as TriSb92 (brand name Covidin, from drugmaker Pandemblock Oy in Finland), the viral inhibitor also appears effective against all coronavirus variants of concern, neutralizing even the Omicron variants BA.5, XBB, and BQ.1.1 in laboratory and mouse studies.
Unlike a COVID vaccine that boosts a person’s immune system as protection, the antiviral nasal spray works more directly by blocking the virus, acting as a “biological mask in the nasal cavity,” according to the biotechnology company set up to develop the treatment.
The product targets a stable site on the spike protein of the virus that is not known to mutate. This same site is shared among many variants of the COVID virus, so it could be effective against future variants as well, researchers note.
“In animal models, by directly inactivating the virus, TriSb92 offers immediate and robust protection” against coronavirus infection and severe COVID, said Anna R. Mäkelä, PhD, lead author of the study and a senior scientist in the department of virology at the University of Helsinki.
The study was published online in Nature Communications.
A potential first line of defense
Even in cases where the antiviral does not prevent coronavirus infection, the treatment could slow infection by limiting how much virus can replicate early in the lining of the nose and nasopharynx (the upper part of the throat), said Dr. Mäkelä, who is also CEO of Pandemblock Oy, the company set up to develop the product.
“TriSb92 could effectively tip the balance in favor of [the person] and thereby help to reduce the risk of severe COVID-19 disease,” she said.
The antiviral also could offer an alternative to people who cannot or do not respond to a vaccine.
“Many elderly people, as well as individuals who are immunodeficient for various reasons, do not respond to vaccines and are in need of other protective measures,” said Kalle Saksela, MD, PhD, senior author of the study and a virologist at the University of Helsinki.
Multiple doses needed?
TriSb92 is “one of multiple nasal spray approaches but unlikely to be as durable as effective nasal vaccines,” said Eric Topol, MD, a professor of molecular medicine and executive vice president of Scripps Research in La Jolla, Calif. Dr. Topol is also editor-in-chief of Medscape, WebMD’s sister site for medical professionals.
“The sprays generally require multiple doses per day, whereas a single dose of a nasal vaccine may protect for months,” he said.
“Both have the allure of being variant-proof,” Dr. Topol added.
Thinking small
Many laboratories are shifting from treatments using monoclonal antibodies to treatments using smaller antibody fragments called “nanobodies” because they are more cost-effective and are able to last longer in storage, Dr. Mäkelä and colleagues noted.
Several of these nanobodies have shown promise against viruses in cell culture or animal models, including as an intranasal preventive treatment for SARS-CoV-2.
One of these smaller antibodies, for example, is being developed from llamas; another comes from experiments with yeast to develop synthetic nanobodies; and in a third case, researchers isolated nanobodies from llamas and mice and showed that they could neutralize SARS-CoV-2.
These nanobodies and TriSb92 target a specific part of the coronavirus spike protein called the receptor-binding domain (RBD), which is where the coronavirus attaches to cells in the body. These agents essentially trick the virus by changing the structure of the outside of cells so that it appears a virus has already fused to them; the virus then moves on.
Key findings
The researchers compared mice treated with TriSb92 before and after exposure to SARS-CoV-2. When given in advance, none of the treated mice had SARS-CoV-2 RNA in their lungs, while untreated mice in the comparison group had “abundant” levels.
Other evidence of viral infection showed similar differences between treated and untreated mice in the protective lining of cells called the epithelium inside the nose, nasal mucosa, and airways.
Similarly, when given 2 or 4 hours after SARS-CoV-2 had already infected the epithelium, TriSb92 was linked to a complete lack of the virus’s RNA in the lungs.
It was more effective against the virus, though, when given before infection rather than after, “perhaps due to the initial establishment of the infection,” the researchers note.
The company led by Dr. Mäkelä is now working to secure funding for clinical trials of TriSb92 in humans.
A version of this article first appeared on WebMD.com.
FROM NATURE COMMUNICATIONS
Cluster, migraine headache strongly linked to circadian rhythm
A meta-analysis of 16 studies showed a circadian pattern in 71% of cluster headache attacks (3,490 of 4,953), with a clear circadian peak between 9:00 p.m. and 3:00 a.m.
Migraine was also associated with a circadian pattern in 50% of cases (2,698 of 5,385) across eight studies, with a clear circadian trough between 11:00 p.m. and 7:00 a.m.
Seasonal peaks were also evident for cluster headache (spring and autumn) and migraine (April to October).
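The headline percentages are simple pooled proportions. Here is a quick sketch of the arithmetic, with a standard binomial confidence interval added for context; the interval is ours, not the paper’s.

```python
# Check the pooled proportions reported in the meta-analysis and attach a
# Wilson 95% confidence interval; illustrative arithmetic only.
from statsmodels.stats.proportion import proportion_confint

for label, k, n in [("cluster headache", 3490, 4953), ("migraine", 2698, 5385)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```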
“In the short term, these findings help us explain the timing to patients – for example, it is possible that a headache at 8 a.m. is due to their internal body clock instead of their pillow, or breakfast food, or morning medications,” lead investigator Mark Burish, MD, PhD, associate professor, department of neurosurgery, at University of Texas Health Houston, told this news organization.
“In the long term, these findings do suggest that medications that target the circadian system could be effective in migraine and headache patients,” Dr. Burish added.
The study was published online in Neurology.
Treatment implications?
Across studies, chronotype was “highly variable” for both cluster headache and migraine, the investigators report.
Cluster headache was associated with lower melatonin and higher cortisol levels, compared with non–cluster headache controls.
On a genetic level, cluster headache was associated with two core circadian genes (CLOCK and REV-ERB–alpha), and five of the nine genes that increase the likelihood of having cluster headache are genes with a circadian pattern of expression.
Migraine was associated with lower urinary melatonin levels and with two core circadian genes, CK1-delta and ROR-alpha; 110 of the 168 genes associated with migraine were clock-controlled genes.
“The data suggest that both of these headache disorders are highly circadian at multiple levels, especially cluster headache,” Dr. Burish said in a release.
“This reinforces the importance of the hypothalamus – the area of the brain that houses the primary biological clock – and its role in cluster headache and migraine. It also raises the question of the genetics of triggers such as sleep changes that are known triggers for migraine and are cues for the body’s circadian rhythm,” Dr. Burish said.
“We hope that future research will look into circadian medications as a new treatment option for migraine and cluster headache patients,” Dr. Burish told this news organization.
Importance of sleep regulation
The authors of an accompanying editorial note that even though the study doesn’t have immediate clinical implications, it offers a better understanding of the way chronobiologic factors may influence treatment.
“At a minimum, interventions known to regulate and improve sleep (e.g., melatonin, cognitive behavioral therapy), and which are safe and straightforward to introduce, may be useful in some individuals susceptible to circadian misalignment or sleep disorders,” write Heidi Sutherland, PhD, and Lyn Griffiths, PhD, with Queensland University of Technology, Brisbane, Australia.
“Treatment of comorbidities (e.g., insomnia) that result in sleep disturbances may also help headache management. Furthermore, chronobiological aspects of any pharmacological interventions should be considered, as some frequently used headache and migraine drugs can modulate circadian cycles and influence the expression of circadian genes (e.g., verapamil), or have sleep-related side effects,” they add.
A limitation of the study was the lack of information on factors that could influence the circadian cycle, such as medications; other disorders, such as bipolar disorder; or circadian rhythm issues, such as night-shift work.
The study was supported by grants from the Japan Society for the Promotion of Science, the National Institutes of Health, The Welch Foundation, and The Will Erwin Headache Research Foundation. Dr. Burish is an unpaid member of the medical advisory board of Clusterbusters, and a site investigator for a cluster headache clinical trial funded by Lundbeck. Dr. Sutherland has received grant funding from the U.S. Migraine Research Foundation, and received institute support from Queensland University of Technology for genetics research. Dr. Griffiths has received grant funding from the Australian NHMRC, U.S. Department of Defense, and the U.S. Migraine Research Foundation, and consultancy funding from TEVA.
A version of this article first appeared on Medscape.com.
FROM NEUROLOGY
Kickback Scheme Nets Prison Time for Philadelphia VAMC Service Chief
A former manager at the Philadelphia Veterans Affairs Medical Center (VAMC) has been sentenced to 6 months in federal prison for his part in a bribery scheme.
Ralph Johnson was convicted of accepting $30,000 in kickbacks and bribes for steering contracts to Earron and Carlicha Starks, who ran Ekno Medical Supply and Collondale Medical Supply from 2009 to 2019. Johnson served as chief of environmental services at the medical center. He admitted to receiving cash in binders and packages mailed to his home between 2018 and 2019.
The Starkses pleaded guilty first to paying kickbacks on $7 million worth of contracts to Florida VA facilities, then participated in a sting that implicated Johnson.
The VA Office of Inspector General began investigating Johnson in 2018 after the Starkses, who were indicted for bribing staff at US Department of Veterans Affairs (VA) hospitals in Miami and West Palm Beach, Florida, said they also paid officials in VA facilities on the East Coast.
According to the Philadelphia Inquirer, the judge credited Johnson’s past military service and his “extensive cooperation” with federal authorities investigating fraud within the VA. Johnson apologized to his former employers: “Throughout these 2 and a half years [since the arrest] there’s not a day I don’t think about the wrongness that I did.”
In addition to the prison sentence, Johnson has been ordered to pay back, at $50 a month, the $440,000-plus he cost the Philadelphia VAMC in fraudulent and bloated contracts.
Johnson is at least the third Philadelphia VAMC employee indicted or sentenced for fraud since 2020.