Use the SCAI stages to identify and treat cardiogenic shock
Cardiogenic shock (CS) is being recognized more often in critically ill patients. This increased recognition is likely due to a better understanding of CS and of the benefit of improving cardiac output (CO) to ensure adequate oxygen delivery (DO2).
CS is often, but not always, caused by cardiac dysfunction. The heart is unable to provide adequate DO2 to the tissues, and hypoperfusion ensues. The body attempts to compensate for the poor perfusion with tachycardia, vasoconstriction, and shunting of blood flow to vital organs. These compensatory mechanisms worsen perfusion by increasing myocardial ischemia, which further worsens cardiac dysfunction. This is known as the downward spiral of CS (Ann Intern Med. 1999 Jul 6;131[1]).
There are a number of different etiologies of CS. Historically, acute myocardial infarction (AMI) was the most common cause. In the last 20 years, AMI-induced CS has become less prevalent due to more aggressive reperfusion strategies, while CS due to etiologies such as cardiomyopathy, myocarditis, right ventricular failure, and valvular pathology has become more common. While the overarching goal is to restore DO2 to the tissues, the optimal treatment may differ based on the etiology of the CS. The Society for Cardiovascular Angiography and Interventions (SCAI) published CS classification stages in 2019 and then updated them in 2022 (J Am Coll Cardiol. 2022 Mar 8;79[9]:933-46). In addition to the stages, there is now a three-axis model to address risk stratification. These classifications are a practical means of identifying and treating patients presenting with, or with concern for, acute CS.
Stage A (At Risk) patients are not experiencing CS, but they comprise the at-risk population. Their hemodynamics, physical exam, and markers of hypoperfusion are normal. Stage A includes patients who have had a recent AMI or who have heart failure.
Stage B (Beginning) patients have evidence of hemodynamic instability but are able to maintain tissue perfusion. These patients will have true or relative hypotension or tachycardia (in an attempt to maintain CO). Distal perfusion is adequate, but signs of ensuing decompensation (eg, elevated jugular venous pressure [JVP]) are present. Lactate is < 2.0 mmol/L. Clinicians must be vigilant and treat these patients aggressively so they do not decompensate further. It can be difficult to identify these patients because their blood pressure may appear “normal” when it actually represents a drop from the patient’s baseline.
Chronic heart failure patients with a history of depressed cardiac function will often have periods of cardiac decompensation between stages A and B. These patients are able to maintain perfusion for longer periods of time before further decompensation with hypoperfusion. If and when they do decompensate, they will often have a steep downward trajectory, so it is advantageous to the patient to be aggressive early.
Stage C (Classic) patients have evidence of tissue hypoperfusion. While these patients often have true or relative hypotension, hypotension is not part of the definition of stage C. These patients have evidence of volume overload, with elevated JVP and rales throughout their lung fields. They have poor distal perfusion and cool extremities that may become mottled. Lactate is ≥ 2 mmol/L. B-type natriuretic peptide (BNP) and liver function test (LFT) results are elevated, and urine output is diminished. If a pulmonary arterial catheter is placed (highly recommended), the cardiac index (CI) is < 2.2 L/min/m2 and the pulmonary capillary wedge pressure (PCWP) is > 15 mm Hg. These patients fit the picture most clinicians have in mind when they think of CS.
These patients need improved tissue perfusion. Inotropic support is needed to augment CO and DO2, and pharmacologic support is often the initial step. These patients also benefit from volume removal, usually accomplished with aggressive diuresis with a loop diuretic.
Stage D (Deteriorating) patients have failed initial treatment with single-agent inotropic support. Hypoperfusion is not improving and is often worsening. Lactate remains > 2 mmol/L or is rising, and BNP and LFTs are also rising. These patients require additional inotropes and usually need vasopressors. Mechanical circulatory support (MCS) is often needed in addition to pharmacologic inotropic support.
Stage E (Extremis) patients have actual or impending circulatory collapse. These patients are peri-arrest with profound hypotension, lactic acidosis (often > 8 mmol/L), and unconsciousness. These patients are worsening despite multiple strategies to augment CO and DO2. These patients will likely die without emergent veno-arterial (VA) extracorporeal membrane oxygenation (ECMO). The goal of treatment is to stabilize the patient as quickly as possible to prevent cardiac arrest.
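For readers who like to see criteria operationalized, the sketch below is a minimal, purely illustrative Python mapping of the simplified stage descriptions above. The field names, thresholds, and structure are assumptions made for teaching purposes; actual SCAI staging is a bedside clinical judgment that integrates exam, hemodynamics, labs, and trajectory, not a lookup function.

```python
# Illustrative sketch only: a toy mapping of the simplified SCAI stage
# descriptions above. Field names and thresholds are assumptions for
# demonstration; real staging is a clinical judgment, not a lookup.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ShockAssessment:
    hemodynamic_instability: bool          # true/relative hypotension or tachycardia
    hypoperfusion: bool                    # cool extremities, low urine output, rising BNP/LFTs
    lactate_mmol_l: Optional[float] = None
    failing_single_inotrope: bool = False  # worsening despite one inotrope
    circulatory_collapse: bool = False     # peri-arrest, profound hypotension


def scai_stage(a: ShockAssessment) -> str:
    """Return an approximate SCAI stage (A-E) from a simplified assessment."""
    lactate = a.lactate_mmol_l or 0.0
    if a.circulatory_collapse or lactate > 8:
        return "E"  # Extremis: actual or impending circulatory collapse
    if a.failing_single_inotrope:
        return "D"  # Deteriorating: hypoperfusion despite initial inotropic support
    if a.hypoperfusion or lactate >= 2:
        return "C"  # Classic: evidence of tissue hypoperfusion
    if a.hemodynamic_instability:
        return "B"  # Beginning: instability without hypoperfusion
    return "A"      # At risk: normal exam, hemodynamics, and perfusion markers


# Example: a hypotensive patient with cool extremities and lactate 3.1 mmol/L
print(scai_stage(ShockAssessment(True, True, 3.1)))  # -> "C"
```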
In addition to the stage of CS, SCAI developed the three-axis model of risk stratification as a conceptual model to be used for evaluation and prognostication. Etiology and phenotype, shock severity, and risk modifiers are factors related to patient outcomes from CS. This model is a way to individualize treatment to a specific patient.
Shock severity: What is the patient’s shock stage? What are the hemodynamic and metabolic abnormalities? What are the doses of the inotropes or vasopressors? Risk increases with higher shock stage, higher vasoactive agent doses, and worsening metabolic or hemodynamic derangement.
Phenotype and etiology: What is the clinical etiology of the patient’s CS? Is this acute or acute-on-chronic? Which ventricle is involved? Is this cardiac driven, or are other organs the driving factor? Single-ventricle involvement carries a better prognosis than biventricular failure, and cardiogenic collapse due to an overdose may have a better outcome than a massive AMI.
Risk modifiers: How old is the patient? What are the comorbidities? Did the patient have a cardiac arrest? What is the patient’s mental status? Some factors are modifiable, but others are not. The concept of chronologic vs. physiologic age may come into play: a frail 40-year-old with stage 4 cancer and end-stage renal failure may be assessed differently than a 70-year-old with mild hypertension and an AMI.
The SCAI stages of CS are a pragmatic way to assess patients with an acute presentation of CS. These stages have defined criteria and treatment recommendations for all patients. The three-axis model allows the clinician to individualize patient care based on shock severity, etiology/phenotype, and risk modification. The goal of these stages is to identify and aggressively treat patients with CS, as well as identify when treatment is failing and additional therapies may be needed.
Dr. Gaillard is Associate Professor in the Departments of Anesthesiology, Section on Critical Care; Internal Medicine, Section on Pulmonology, Critical Care, Allergy, and Immunologic Diseases; and Emergency Medicine; Wake Forest School of Medicine, Winston-Salem, N.C.
Add hands-on and interactive learning opportunities to your CHEST 2023 schedule
Explore the many ticketed sessions, and sign up early in case they sell out.
Simulation sessions
If you’re looking to gain hands-on exposure to equipment and tools that may not be available at your home institution, look no further than these simulation sessions. Choose from 25 different sessions offering firsthand experience with procedures relevant to your clinical practice.
“It’s a great opportunity to teach higher stakes procedures in a very low stakes environment where everybody’s comfortable and everybody’s learning from each other,” said Live Learning Subcommittee Chair Nicholas Pastis, MD, FCCP.
CHEST 2023 simulation sessions will address clinical topics, including endobronchial ultrasound, cardiopulmonary exercise testing (CPET), intubation and cricothyrotomy, bronchoscopy management, and more. These sessions are taught by experts who use these real-world strategies in their daily practice.
CHEST 2022 attendee Weston Bowker, MD, found value in the simulation courses he was able to attend in Nashville.
“It’s fantastic just to work with some of the leading experts in the field, especially from an interventional pulmonology standpoint. And, you truly get a different experience than maybe what your home institution offers,” he said.
Problem-based learning sessions
Exercise your critical thinking skills by working to resolve real-world clinical problems during these small group sessions. Refine your expertise on topics like lung cancer screening and staging, biologics in asthma, pneumonia, and more.
“Problem-based learning courses take a clinical problem or case study that is somewhat controversial to create a learning environment where the problem itself drives the learning with participants,” said CHEST 2023 Scientific Program Committee Chair Aneesa Das, MD, FCCP. “These are small group sessions where learners can actively participate and collaborate to discuss various perspectives on the issue and work toward potential solutions.”
This year’s problem-based learning courses were chosen based on common controversies in chest medicine and current hot topics in medicine.
Dr. Das is excited about the Using CPET to Solve Your Difficult Cases course. “Cardiopulmonary exercise tests can sometimes be difficult even for seasoned physicians. This is always an amazing problem-based learning topic,” she added.
Meet the Professor sessions
Connect with leading chest medicine experts during these limited-capacity discussions capped at 24 registrants per session. Meet the Professor attendees will have the opportunity to engage in stimulating conversations on bronchiectasis, central airway obstructions, obesity hypoventilation, and sublobar resection.
“Meet the Professor sessions are a unique opportunity to interact and learn from a leader in the field in a very small group setting on a high-yield topic,” said Dr. Das. “These sessions allow for a learning environment that is personalized and intimate.”
Commentary: Vasodilation, antihypertensive drugs, and caffeine in migraine, August 2023
Migraine has long been considered a vascular phenomenon, but research over time has shown that vasodilation is a secondary feature of headache rather than the cause of headache pain. Calcitonin gene-related peptide (CGRP) and other vasoactive inflammatory proteins transmit nociceptive signals through the trigeminal system, and although vasodilation occurs, it is not essential for migraine attacks to occur. White matter changes on MRI are a common finding in people with migraine, and the burden of migraine often correlates with the amount of white matter changes seen. This highlights the indirect connection between migraine and vascular risk factors, which the first study discussed here attempts to better quantify, specifically with respect to stroke and myocardial infarction (MI).
The study by Fuglsang and colleagues was a nationwide, registry-based, population cohort study that included over 200,000 individuals with migraine, using data collected from 1996 to 2018. Participants were classified as having or not having migraine on the basis of prescriptions for preventive or acute migraine medications. Male and female participants were further subdivided, and these groups were compared with healthy controls. The primary endpoints were hazard ratios and absolute risk differences for developing hemorrhagic or ischemic stroke or MI among all groups.
The researchers found an increased risk for ischemic stroke that was equal among male and female participants. Hemorrhagic stroke and MI were also increased in migraine, but primarily among women. This study specifically investigated what the researchers termed "premature" stroke and MI, and estrogen may well account for the difference in risk between male and female participants with migraine. I have recently highlighted a number of studies investigating vascular risk factors associated with migraine; this study will help clinicians appropriately educate their patients with migraine regarding vascular risk.
The first medications reported to be helpful for migraine prevention were antihypertensives, specifically beta-blockers (BB). A number of medications in other antihypertensive subclasses have subsequently been shown to be helpful for migraine prevention as well, including angiotensin-converting enzyme (ACE) inhibitors, angiotensin receptor blockers (ARB), calcium channel blockers (CCB), and alpha-blockers (AB). Carcel and colleagues conducted a meta-analysis that investigated a wide variety of antihypertensive medications in multiple classes and compared the reduction in headache frequency, defined as headache days per month.
This analysis reviewed 50 studies involving over 4000 participants; the majority (35 of 50 [70%]) had a crossover design. The medications evaluated included clonidine (an alpha agonist), candesartan (an ARB), telmisartan (an ARB), propranolol (a BB), timolol (a BB), pindolol (a BB), metoprolol (a BB), bisoprolol (a BB), atenolol (a BB), alprenolol (a BB), nimodipine (a CCB), nifedipine (a CCB), verapamil (a CCB), nicardipine (a CCB), enalapril (an ACE inhibitor), and lisinopril (an ACE inhibitor). For each class of antihypertensive, there were fewer monthly headache days with treatment than with placebo; the greatest reduction was for the CCBs, with a mean difference of about 2 days per month, whereas BBs on average decreased headache days by 0.7 days per month. For BBs, there was no clear trend toward increased efficacy with increased dose. Only six trials reported the difference in blood pressure: on average, there was a 9.3 mm Hg drop in systolic and a 3.0 mm Hg drop in diastolic pressure.
The authors showed a statistically significant reduction in migraine days per month with antihypertensive medications, and the effect was statistically significant separately for numerous specific drugs within these classes: clonidine, candesartan, atenolol, bisoprolol, propranolol, timolol, nicardipine, and verapamil. Antihypertensive medications remain some of the most popular first-line preventive options for migraine, and although the benefit of the class as a whole is mild (slightly more than 1 day per month), they can be an excellent option for many patients.
The relationship between migraine and caffeine is understandably controversial. Caffeine is included as a component of many over-the-counter migraine treatments, and the beneficial effect of caffeine as an acute treatment for migraine has been documented for decades. Reduction in caffeine, however, has also been established as a helpful lifestyle modification for prevention of migraine attacks. Zhang and colleagues used data from the National Health and Nutrition Examination Survey database, a program conducted by the Centers for Disease Control and Prevention to assess the health and nutritional status of adults and children in the United States.
This study sought to quantify the relationship between dietary caffeine and "severe headache." For this study, "severe headache" was defined as answering yes to the question: During the past 3 months, did you have severe headaches or migraines? Dietary caffeine intake was collected through two 24-hour dietary recall interviews, one in person and one 3-10 days later via telephone. The amount of caffeine consumed was estimated in mg/day from all caffeine-containing foods and beverages, including coffee, tea, soda, and chocolate, using the US Department of Agriculture's Food and Nutrient Database. Each participant's caffeine intake was taken as the mean of the first and second dietary recalls.
A large number of covariates were assessed as well, including age, race/ethnicity, body mass index, poverty-income ratio, educational level, marital status, hypertension, cancer, energy intake, protein intake, calcium intake, magnesium intake, iron intake, sodium intake, alcohol status, smoking status, and triglyceride level. A total of 8993 participants were included. Caffeine intake was divided into four groups: ≥ 0 to < 40 mg/day, ≥ 40 to < 200 mg/day, ≥ 200 to < 400 mg/day, and ≥ 400 mg/day. After adjusting for confounders, a significant association between dietary caffeine intake and severe headaches or migraines was detected.
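As a small illustration of the grouping used in the analysis, the sketch below bins a participant's mean daily caffeine intake into the study's four categories. The bin edges come from the article; the function name and return labels are assumptions made for illustration only.

```python
# Minimal sketch of the caffeine-intake grouping described above.
# Bin edges are from the article; function name and labels are assumed.
def caffeine_group(mg_per_day: float) -> str:
    """Bin mean daily caffeine intake (mg/day) into the study's four groups."""
    if mg_per_day < 0:
        raise ValueError("Caffeine intake cannot be negative")
    if mg_per_day < 40:
        return ">=0 to <40 mg/day"
    if mg_per_day < 200:
        return ">=40 to <200 mg/day"
    if mg_per_day < 400:
        return ">=200 to <400 mg/day"
    return ">=400 mg/day"


print(caffeine_group(250))  # -> ">=200 to <400 mg/day"
```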
Curiously, in this study, only male participants were included. The authors found a clear correlation between the amount of caffeine consumed over a 24-hour period and severe migraine attacks. Further evaluation should investigate the frequency of attacks rather than just individual experience over a 3-month period. Although caffeine is helpful acutely, higher dose consumption is a risk factor for worsening migraine.
Medical treatment for appendicitis effective long-term
TOPLINE:
Most patients who receive antibiotics rather than surgical treatment for appendicitis have successful long-term outcomes, but some may require surgery up to 20 years later.
METHODOLOGY:
- Researchers followed up 292 patients from two randomized controlled trials conducted in the 1990s, using the Swedish National Patient Registry.
- Both trials divided patients into two groups: those who underwent appendectomy and those who received antibiotic treatment for appendicitis.
- Researchers looked at rates of recurrent appendicitis that required surgery later in life.
TAKEAWAY:
- 29% of patients in the nonoperative group who were discharged successfully during the initial study eventually underwent surgery.
- Some patients who initially received antibiotics required surgery up to 20 years later.
- 9.5% of patients who didn’t undergo surgery went to a surgical outpatient clinic for abdominal pain, compared with 0.01% of those who had surgery.
IN PRACTICE:
“More than half of the patients treated nonoperatively did not experience recurrence and avoided surgery over approximately 2 decades. There is no evidence for long-term risks of nonoperative management other than that of recurrence of appendicitis,” the authors report.
SOURCE:
Simon Eaton, PhD, of UCL Great Ormond Street Institute of Child Health in London, was the corresponding author of the study, published online in JAMA Surgery. The study was funded by the NIHR Biomedical Research Centre at Great Ormond Street Hospital and the Swedish Research Council.
LIMITATIONS:
The data were retrospective, so the researchers could not track how patients’ circumstances and characteristics changed over time. Most patients were male, and the researchers lacked histopathology results for patients for whom nonsurgical treatment succeeded initially but who later required appendectomy. They also relied on diagnostic standards used in the 1990s, when the initial studies were performed; these were less sophisticated and accurate than recent standards.
DISCLOSURES:
Coauthor Jan Svensson, MD, PhD, reported receiving grants from the Lovisa Foundation during the conduct of the study. No other disclosures were reported.
A version of this article first appeared on Medscape.com.
Genetic profiles affect smokers’ lung cancer risk
Genetic factors may help explain why some heavy smokers develop lung cancer while others do not, according to a study conducted by specialists from the Cancer Center at the University of Navarra Clinic (CUN). The results were presented at the annual meeting of the American Society of Clinical Oncology.
Ana Patiño García, PhD, director of the genomic medicine unit at the CUN and a coordinator of the research, explained in an interview the main reason why this study was conducted. “This study came straight out of the oncology clinic, where we are constantly encountering patients with lung cancer who have never smoked or who have smoked very little, while we also all know people who have smoked a lot throughout their lifetime and have never developed cancer. This observation has led us to ask whether there are genetic factors that increase or decrease the risk of cancer and protect people against this disease.”
José Luis Pérez Gracia, MD, PhD, oncologist, coordinator of the oncology trials department at the CUN and another of the individuals responsible for this research, said: “This is the first study to validate genetic factors associated with people who appear to be resistant to developing tobacco-related lung cancer or who, on the other hand, are at high risk of developing this disease.”
Pioneering approach
It has long been observed that some smokers develop lung cancer while others do not. “This is a very well-known fact, since everyone knows about some elderly person who has been a heavy smoker and has never developed lung cancer,” said Dr. Pérez. “Unfortunately, we oncologists encounter young smokers who have been diagnosed with this disease. However, despite the importance of understanding the causes behind these phenotypes, it is a question that has never been studied from a genetic standpoint.”
The study was conducted using DNA from 133 heavy smokers who had not developed lung cancer at a mean age of 80 years, and from another 116 heavy smokers who had developed this type of cancer at a mean age of 50 years. This DNA was sequenced using next-generation techniques, and the results were analyzed using bioinformatics and artificial intelligence systems in collaboration with the University of Navarra Applied Medical Research Center and the University of Navarra School of Engineering.
When asked how this methodology could be applied to support other research conducted along these lines, Dr. Patiño said, “The most novel thing about this research is actually its approach. It’s based on groups at the extremes, defined by the patient’s age at the time of developing lung cancer and how much they had smoked. This type of comparative design is called extreme phenotypes, and its main distinguishing characteristic – which is also its most limiting characteristic – is choosing cases and controls well. Obviously, with today’s next-generation sequencing technologies, we achieve a quantity and quality of data that would have been unattainable in years gone by.”
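To make the extreme-phenotypes design more concrete, the sketch below shows the kind of per-variant case-control comparison such a study might run. The variant, the carrier counts, and the use of Fisher’s exact test are illustrative assumptions only; they are not the CUN team’s actual pipeline, which combined next-generation sequencing with bioinformatics and artificial intelligence tools.

```python
# Hypothetical per-variant comparison between extreme-phenotype groups:
# cases    = heavy smokers who developed lung cancer young (n = 116)
# controls = elderly heavy smokers who never developed lung cancer (n = 133)
from scipy.stats import fisher_exact

cases_with_variant, cases_without = 30, 86          # made-up counts
controls_with_variant, controls_without = 12, 121   # made-up counts

table = [[cases_with_variant, cases_without],
         [controls_with_variant, controls_without]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

In a real analysis a test of this kind would be repeated across many variants, with correction for multiple comparisons before any variant is reported as differential between the groups.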
Speaking to the role played by bioinformatics and artificial intelligence in this research, Dr. Patiño explained that they are fairly new techniques. “In fact, these technologies could be thought of as spearheading a lot of the biomedical research being done today. They’ve also somewhat set the stage for the paradigm shift where the investigator asks the data a question, and in the case of artificial intelligence, it’s the data that answer.”
Pinpointing genetic differences
In his analysis of the most noteworthy data and conclusions from this research, Dr. Pérez noted, “The most significant thing we’ve seen is that both populations have genetic differences. This suggests that our hypothesis is correct. Of course, more studies including a larger number of individuals will be needed to confirm these findings. For the first time, our work has laid the foundation for developing this line of research.”
“Many genetic variants that we have identified as differentials in cases and controls are found in genes relevant to the immune system (HLA system), in genes related to functional pathways that are often altered in tumor development, and in structural proteins and in genes related to cell mobility,” emphasized Dr. Patiño.
Many of the genetic characteristics that were discovered are located in genes with functions related to cancer development, such as immune response, repair of genetic material, and regulation of inflammation. This finding is highly significant, said Dr. Pérez. “However, we must remember that these phenotypes may be attributable to multiple causes, not just one cause.”
Furthermore, the specialist explained the next steps to be taken in the context of the line opened up by this research. “First, we must expand these studies, including more individuals with, if possible, even more extreme phenotypes: more smokers who are older and younger, respectively. Once the statistical evidence is stronger, we must also confirm that the alterations observed in lab-based studies truly impact gene function.”
Earlier diagnosis
The clinician also discussed the potential ways that the conclusions of this study could be applied to clinical practice now and in the future, and how the conclusions could benefit these patients. “The results of our line of research may help in early identification of those individuals at high risk of developing lung cancer if they smoke, so that they could be included in prevention programs to keep them from smoking or to help them stop smoking,” said Dr. Pérez. “It would also allow for early diagnosis of cancer at a time when there is a much higher chance of curing it.
“However, the most important thing is that our study may allow us to better understand the mechanisms by which cancer arises and especially why some people do not develop it. This [understanding] could lead to new diagnostic techniques and new treatments for this disease. The techniques needed to develop this line of research (bioinformatic mass sequencing and artificial intelligence) are available and becoming more reliable and more accessible every day. So, we believe our strategy is very realistic,” he added.
Although the line of research opened up by this study depicts a new scenario, the specialists still must face several challenges to discover why some smokers are more likely than others to develop lung cancer.
“There are many lines of research in this regard,” said Dr. Pérez. “But to name a few, I would draw attention to the need to increase the number of cases and controls to improve the comparison, study patients with other tumors related to tobacco use, ask new questions using the data we have already collected, and apply other genomic techniques that would allow us to perform additional studies of genetic variants that have not yet been studied. And, of course, we need to use functional studies to expand our understanding of the function and activity of the genes that have already been identified.”
Dr. Patiño and Dr. Pérez declared that they have no relevant financial conflicts of interest.
This article was translated from the Medscape Spanish Edition. A version appeared on Medscape.com.
FROM ASCO 2023
Female CRC survivors may experience long-term GI symptoms
TOPLINE:
Most female colorectal cancer (CRC) survivors report persistent gastrointestinal (GI) symptoms years after their diagnosis, suggesting a need to improve GI symptom management in this population.
METHODOLOGY:
- In this cross-sectional study, investigators used data from the Women’s Health Initiative (WHI) Life and Longevity After Cancer study to explore the impact of cancer treatments on persistent GI symptoms in long-term female CRC survivors and why some patients suffer from these symptoms.
- The cohort consisted of 413 postmenopausal women aged 50-79 years. The mean age of the patients was 62.7 years at the time of CRC diagnosis and 71.2 years at survey completion.
- Study participants received a CRC diagnosis, mostly in the colon (n = 341), before 2011.
- Participants completed lifestyle questionnaires at baseline and annually thereafter. The questionnaires assessed a range of factors, including GI symptoms, psychological well-being, physical activity, and dietary habits.
TAKEAWAY:
- Most CRC survivors (81%) reported persistent GI symptoms more than 8 years after their cancer diagnosis.
- Abdominal bloating/gas was the most common symptom (54.2%), followed by constipation (44.1%), diarrhea (33.4%), and abdominal/pelvic pain (28.6%). Overall, 15.4% of CRC survivors reported having moderate to severe overall GI symptoms.
- Psychological distress – namely, fatigue, sleep disturbance, and anxiety – represented the most important risk factor for long-term GI symptoms. Other risk factors included time since cancer diagnosis of less than 5 years, advanced cancer stage, poor dietary habits, and low physical activity.
- GI symptoms affected survivors’ quality of life, functioning, and body image.
IN PRACTICE:
“Building upon prior work, our findings contribute to the literature by demonstrating strong relationships between GI symptoms and psychological symptoms,” the authors concluded. “Our findings shed light on the importance of psychosocial support as well as lifestyle interventions (specifically nutritional management) in managing GI symptoms in CRC survivors.”
SOURCE:
The study was led by Claire Han and was published in PLOS ONE in May 2023.
LIMITATIONS:
- The cross-sectional study design limited the researchers’ ability to identify causal effects with respect to risk factors, life impact, and GI symptoms.
- Symptom data were self-reported and so may have been underreported or overreported.
DISCLOSURES:
The study had no direct funding support. The original data collection for the WHI was funded by the National Heart, Lung, and Blood Institute. Authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM PLOS ONE
Growth hormone links with worse T2D control in adolescents
SAN DIEGO – Plasma levels of three proteins involved in growth hormone activity showed significant links to the controllability of type 2 diabetes in children, a finding that suggests these proteins may serve as risk markers for incident type 2 diabetes and help identify adolescents who could benefit from aggressive preventive care.
“Plasma growth hormone mediators are associated with glycemic failure in youth with type 2 diabetes,” Chang Lu, MD, said at the annual scientific sessions of the American Diabetes Association. “Our hope is that these mediators could be biomarkers for predicting type 2 diabetes onset,” she added in an interview.
Another potential application is to “leverage these data to find predictive markers” that could identify adolescents with type 2 diabetes “at risk for particularly aggressive disease and target them for more intervention,” added Elvira M. Isganaitis, MD, senior author of the report and a pediatric endocrinologist at the Joslin Diabetes Center in Boston.
Does growth hormone cause incident T2D at puberty?
Changes in levels of growth hormone–associated peptides during puberty “could potentially explain why children with type 2 diabetes have a more aggressive course” of the disorder, added Dr. Lu, a pediatric endocrinologist at Joslin and at Boston Children’s Hospital.
Puberty-associated changes in growth hormone and related peptides “could be why type 2 diabetes starts during puberty. Type 2 diabetes is almost unheard of before children reach about age 10,” Dr. Isganaitis said in an interview.
A current hypothesis is that “high levels of growth hormone is a cause of insulin resistance during puberty, but in healthy children their beta cells overcome this by making more insulin and so they do not develop diabetes,” said Kristen J. Nadeau, MD, a pediatric endocrinologist and professor at Children’s Hospital Colorado in Denver.
“But this is a stress situation, and if someone has poor beta-cell function they may develop diabetes. The increase in growth hormone [during puberty] can unmask a physiologic and genetic predisposition” to developing type 2 diabetes, Dr. Nadeau said in an interview.
The analyses run by Dr. Lu, Dr. Isganaitis, and their coauthors used data collected in the Treatment Options for Type 2 Diabetes in Adolescents and Youth (TODAY) study, which randomized 699 children aged 10-17 years with type 2 diabetes to one of three antidiabetes treatment regimens and tallied the subsequent incidence of glycemic failure. The study defined glycemic failure as a hemoglobin A1c level of at least 8% persisting for 6 months or a need for insulin treatment.
The primary outcome showed a 39%-52% incidence of failure during 5 years of follow-up depending on the specific treatments the study participants received.
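As a concrete illustration of this endpoint, the minimal sketch below encodes the glycemic-failure definition described above: a hemoglobin A1c of at least 8% sustained for roughly 6 months, or a need for insulin. The visit structure and field names are hypothetical and are not drawn from the TODAY study’s actual data model.

```python
from datetime import date

def glycemic_failure(visits, needs_insulin):
    """Return True if the TODAY-style failure definition is met.

    visits: list of (visit_date, hba1c_percent) tuples, sorted by date.
    needs_insulin: True if insulin treatment became necessary.
    """
    if needs_insulin:
        return True
    run_start = None  # start of the current run of HbA1c >= 8%
    for visit_date, hba1c in visits:
        if hba1c >= 8.0:
            run_start = run_start or visit_date
            if (visit_date - run_start).days >= 182:  # roughly 6 months
                return True
        else:
            run_start = None
    return False

# Example: HbA1c stays at or above 8% from January through August -> failure.
visits = [(date(2023, 1, 1), 8.4), (date(2023, 4, 1), 8.2), (date(2023, 8, 1), 8.1)]
print(glycemic_failure(visits, needs_insulin=False))  # True
```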
Growth hormone correlates of glycemic failure
The new analyses focused on 310 study participants from TODAY who had plasma specimens available from baseline and a second specimen obtained after 3 years of follow-up. The researchers compared the levels of three peptides that mediate growth hormone signaling at baseline and after 3 years, and assessed these changes relative to the endpoint of glycemic failure.
The results showed that an increase in insulin-like growth factor-1 was significantly linked with a reduced incidence of glycemic failure and with improved glycemia and beta-cell function.
In contrast, an increase in insulin-like growth factor binding protein-1 was significantly linked with glycemic failure and hyperglycemia at 36 months, as well as with higher insulin sensitivity at baseline. All these analyses adjusted for baseline differences in several demographic and clinical variables.
But these post hoc analyses could not determine whether these associations resulted from, or had a causal role in, treatment failure, cautioned Dr. Lu.
Future studies should examine the relationship of growth hormone signaling and the course of glycemic control in children and adolescents with prediabetes and obesity, Dr. Lu said.
Confirming that these growth hormone-related proteins are reliable predictors of future glycemic dysfunction would open the door to studies of interventions to slow or prevent progression to type 2 diabetes in children identified as high risk.
Potential interventions include early initiation of insulin treatment, which could help preserve beta-cell function, or treatment with a glucagon-like peptide-1 (GLP-1) agonist, a class of agents that may interact with the insulin-like growth factor-1 receptors on beta cells, Dr. Lu said.
The study received no commercial funding. Dr. Lu, Dr. Isganaitis, and Dr. Nadeau reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
AT ADA 2023
Older women risk overdiagnosis with mammograms: Study
Women who continued breast cancer screenings when they reached age 70 had no lower chance of dying from the disease, and just getting a mammogram could instead set them on a path toward unnecessary risks, according to a new study from Yale University.
The findings, published in Annals of Internal Medicine, suggest that a substantial proportion of the breast cancers detected by continued screening in this age group may be overdiagnosed, meaning that the cancer found during the screening would not have caused symptoms in a person’s lifetime. (For context, the average life expectancy of a woman in the U.S. is 79 years, according to the Centers for Disease Control and Prevention.)
Overdiagnosis can be harmful because it carries the risks of complications from overtreatment, plus financial and emotional hardships and unnecessary use of limited resources.
For the study, researchers analyzed data for 54,635 women aged 70 and older and compared the rate of breast cancer diagnosis and death among women who did and did not have mammograms during a 15-year follow-up period.
The rate of breast cancer in the study among women aged 70-74 was 6% for women who were screened and 4% for women who were not screened. The researchers estimated that 31% of the cases were potentially overdiagnosed. Among women aged 75-84, breast cancer was found in 5% of women who were screened, compared to less than 3% of unscreened women. Their estimated overdiagnosis rate was 47%. Finally, 3% of women aged 85 and older who were screened had breast cancer detected, compared with 1% of women in the unscreened group. For the older group, the overdiagnosis rate was 54%.
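For a rough sense of scale, the detection rates above can be converted into excess screen-detected cancers per 1,000 women, as in the short sketch below. This back-of-the-envelope arithmetic is illustrative only and is not the method the Yale researchers used to produce the 31%, 47%, and 54% overdiagnosis estimates.

```python
# Convert the reported detection-rate gap into excess detections per 1,000 women.
# This is NOT the study's estimation method; it is a crude illustration only.
rates = {              # age group: (screened %, unscreened %) as reported
    "70-74": (6.0, 4.0),
    "75-84": (5.0, 3.0),   # "less than 3%" was reported for unscreened women
    "85+":   (3.0, 1.0),
}
for group, (screened, unscreened) in rates.items():
    extra_per_1000 = (screened - unscreened) * 10  # percentage points -> per 1,000
    print(f"Ages {group}: roughly {extra_per_1000:.0f} more cancers detected per 1,000 screened women")
```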
“While our study focused on overdiagnosis, it is important to acknowledge that overdiagnosis is just one of many considerations when deciding whether to continue screening,” researcher and Yale assistant professor of medicine Ilana Richman, MD, said in a statement. “A patient’s preferences and values, personal risk factors, and the overall balance of risks and benefits from screening are also important to take into account when making screening decisions.”
A version of this article first appeared on WebMD.com.
FROM ANNALS OF INTERNAL MEDICINE
‘Emerging’ biomarker may predict mild cognitive impairment years before symptoms
Low levels of the protein neuronal pentraxin 2 (NPTX2) in cerebrospinal fluid (CSF) may predict the onset of mild cognitive impairment (MCI) years before symptoms appear, new research indicates.
“Our study shows that low NPTX2 levels are predictive of MCI symptom onset more than 7 years in advance, including among individuals who are in late middle age,” said study investigator Anja Soldan, PhD, associate professor of neurology, Johns Hopkins University School of Medicine, Baltimore.
NPTX2 is still considered an “emerging biomarker” because knowledge about this protein is limited, Dr. Soldan noted.
Prior studies have shown that levels of NPTX2 are lower in people with MCI and dementia than in those with normal cognition and that low levels of this protein in people with MCI are associated with an increased risk of developing dementia.
“Our study extends these prior findings by showing that low protein levels are also associated with the onset of MCI symptoms,” Dr. Soldan said.
The study was published online in Annals of Neurology.
New therapeutic target?
The researchers measured NPTX2, as well as amyloid beta 42/40, phosphorylated (p)-tau181, and total (t)-tau in CSF collected longitudinally from 269 cognitively normal adults from the BIOCARD study.
The average age at baseline was 57.7 years. Nearly all were White, 59% were women, most were college educated, and three-quarters had a close relative with Alzheimer’s disease.
During a mean follow-up of 16 years, 77 participants progressed to MCI or dementia, either within 7 years of the baseline measurements or afterward.
In Cox regression models, lower baseline NPTX2 levels were associated with an earlier time to MCI symptom onset (hazard ratio, 0.76; P = .023). This association was significant for progression within 7 years (P = .036) and after 7 years from baseline (P = .001), the investigators reported.
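As a minimal sketch of how such a time-to-event analysis might look in code, the example below fits a Cox proportional hazards model with the lifelines Python library. The data frame, column names, covariates, and values are hypothetical stand-ins; the investigators’ actual models included adjustments not shown here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: time to MCI symptom onset (or censoring), an event flag,
# and two baseline covariates loosely modeled on those discussed in the article.
df = pd.DataFrame({
    "years_to_mci_or_censor": [6.2, 14.0, 9.5, 16.0, 11.3, 8.0, 15.5, 12.7],
    "progressed_to_mci":      [1,   0,    1,   0,    1,    0,   1,    0],
    "baseline_nptx2":         [950, 1400, 880, 1500, 1020, 1100, 1300, 990],
    "baseline_ptau181":       [22,  15,   28,  14,   25,   18,   17,   21],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_mci_or_censor", event_col="progressed_to_mci")
cph.print_summary()  # reports a hazard ratio per unit increase of each covariate
```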
Adults who progressed to MCI had, on average, about 15% lower levels of NPTX2 at baseline, compared with adults who remained cognitively normal.
Baseline NPTX2 levels improved prediction of time to MCI symptom onset after accounting for baseline Alzheimer’s disease biomarker levels (P < .01), and NPTX2 did not interact with the CSF Alzheimer’s disease biomarkers or APOE-ε4 genetic status.
Higher baseline levels of p-tau181 and t-tau were associated with higher baseline NPTX2 levels (both P < .001) and with greater declines in NPTX2 over time, suggesting that NPTX2 may decline in response to tau pathology, the investigators suggested.
Dr. Soldan said NPTX2 may be “a novel target” for developing new therapeutics for Alzheimer’s disease and other dementing and neurodegenerative disorders, as it is not an Alzheimer’s disease–specific protein.
“Efforts are underway for developing a sensitive way to measure NPTX2 brain levels in blood, which could then help clinicians identify individuals at greatest risk for cognitive decline,” she explained.
“Other next steps are to examine how changes in NPTX2 over time relate to changes in brain structure and function and to identify factors that alter levels of NPTX2, including genetic factors and potentially modifiable lifestyle factors,” Dr. Soldan said.
“If having higher levels of NPTX2 in the brain provides some resilience against developing symptoms of Alzheimer’s disease, it would be great if we could somehow increase levels of the protein,” she noted.
Caveats, cautionary notes
Commenting on this research, Christopher Weber, PhD, Alzheimer’s Association director of global science initiatives, said, “Research has shown that when NPTX2 levels are low, it may lead to weaker connections between neurons and could potentially affect cognitive functions, including memory and learning.”
“This new study found an association between lower levels of NPTX2 in CSF and earlier time to MCI symptom onset, and when combined with other established Alzheimer’s biomarkers, they found that NPTX2 improved the prediction of Alzheimer’s symptom onset,” Dr. Weber said.
“This is in line with previous research that suggests NPTX2 levels are associated with an increased risk of progression from MCI to Alzheimer’s dementia,” Dr. Weber said.
However, he noted some limitations of the study. “Participants were primarily White [and] highly educated, and therefore findings may not be generalizable to a real-world population,” he cautioned.
Dr. Weber said it’s also important to note that NPTX2 is not considered an Alzheimer’s-specific biomarker but rather a marker of synaptic activity and neurodegeneration. “The exact role of NPTX2 in predicting dementia is unknown,” Dr. Weber said.
He said that more studies with larger, more diverse cohorts are needed to fully understand its significance as a biomarker or therapeutic target for neurodegenerative diseases, as well as to develop a blood test for NPTX2.
The study was supported by the National Institutes of Health. Dr. Soldan and Dr. Weber report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
, new research indicates.
“Our study shows that low NPTX2 levels are predictive of MCI symptom onset more than 7 years in advance, including among individuals who are in late middle age,” said study investigator Anja Soldan, PhD, associate professor of neurology, Johns Hopkins University School of Medicine, Baltimore.
NPTX2 is still considered an “emerging biomarker” because knowledge about this protein is limited, Dr. Soldan noted.
Prior studies have shown that levels of NPTX2 are lower in people with MCI and dementia than in those with normal cognition and that low levels of this protein in people with MCI are associated with an increased risk of developing dementia.
“Our study extends these prior findings by showing that low protein levels are also associated with the onset of MCI symptoms,” Dr. Soldan said.
The study was published online in Annals of Neurology.
New therapeutic target?
The researchers measured NPTX2, as well as amyloid beta 42/40, phosphorylated (p)-tau181, and total (t)-tau in CSF collected longitudinally from 269 cognitively normal adults from the BIOCARD study.
The average age at baseline was 57.7 years. Nearly all were White, 59% were women, most were college educated, and three-quarters had a close relative with Alzheimer’s disease.
During a mean follow-up average of 16 years, 77 participants progressed to MCI or dementia within or after 7 years of baseline measurements.
In Cox regression models, lower baseline NPTX2 levels were associated with an earlier time to MCI symptom onset (hazard ratio, 0.76; P = .023). This association was significant for progression within 7 years (P = .036) and after 7 years from baseline (P = .001), the investigators reported.
Adults who progressed to MCI had, on average, about 15% lower levels of NPTX2 at baseline, compared with adults who remained cognitively normal.
Baseline NPTX2 levels improved prediction of time to MCI symptom onset after accounting for baseline Alzheimer’s disease biomarker levels (P < .01), and NPTX2 did not interact with the CSF Alzheimer’s disease biomarkers or APOE-ε4 genetic status.
Higher baseline levels of p-tau181 and t-tau were associated with higher baseline NPTX2 levels (both P < .001) and with greater declines in NPTX2 over time, suggesting that NPTX2 may decline in response to tau pathology, the investigators suggested.
Dr. Soldan said NPTX2 may be “a novel target” for developing new therapeutics for Alzheimer’s disease and other dementing and neurodegenerative disorders, as it is not an Alzheimer’s disease–specific protein.
“Efforts are underway for developing a sensitive way to measure NPTX2 brain levels in blood, which could then help clinicians identify individuals at greatest risk for cognitive decline,” she explained.
“Other next steps are to examine how changes in NPTX2 over time relate to changes in brain structure and function and to identify factors that alter levels of NPTX2, including genetic factors and potentially modifiable lifestyle factors,” Dr. Soldan said.
“If having higher levels of NPTX2 in the brain provides some resilience against developing symptoms of Alzheimer’s disease, it would be great if we could somehow increase levels of the protein,” she noted.
Caveats, cautionary notes
Commenting on this research, Christopher Weber, PhD, Alzheimer’s Association director of global science initiatives, said, “Research has shown that when NPTX2 levels are low, it may lead to weaker connections between neurons and could potentially affect cognitive functions, including memory and learning.”
“This new study found an association between lower levels of NPTX2 in CSF and earlier time to MCI symptom onset, and when combined with other established Alzheimer’s biomarkers, they found that NPTX2 improved the prediction of Alzheimer’s symptom onset,” Dr. Weber said.
“This is in line with previous research that suggests NPTX2 levels are associated with an increased risk of progression from MCI to Alzheimer’s dementia,” Dr. Weber said.
However, he noted some limitations of the study. “Participants were primarily White [and] highly educated, and therefore findings may not be generalizable to a real-world population,” he cautioned.
Dr. Weber said it’s also important to note that NPTX2 is not considered an Alzheimer’s-specific biomarker but rather a marker of synaptic activity and neurodegeneration. “The exact role of NPTX2 in predicting dementia is unknown,” Dr. Weber said.
He said that more studies with larger, more diverse cohorts are needed to fully understand its significance as a biomarker or therapeutic target for neurodegenerative diseases, as well as to develop a blood test for NPTX2.
The study was supported by the National Institutes of Health. Dr. Soldan and Dr. Weber report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM ANNALS OF NEUROLOGY
Scalp cooling for chemo hair loss strikes out with patients
TOPLINE:
Women with breast cancer who used scalp cooling during chemotherapy reported no significant difference in hair loss, quality of life, or body image, compared with those who opted to forgo scalp cooling.
METHODOLOGY:
- Although studies have demonstrated the effectiveness of scalp cooling to reduce hair loss during breast cancer chemotherapy, most were in the setting of single-agent regimens instead of much more commonly used combined chemotherapy, and few studies assessed patients’ subjective experience.
- To get a real-world sense of the treatment, investigators compared outcomes in 75 women with breast cancer who opted to use the Orbis Paxman cooling cap during taxane/anthracycline-based chemotherapy sessions with 38 women with breast cancer who declined to use the cooling cap.
- The women were surveyed on hair loss perception, functional health, and body image at baseline, at mid-chemotherapy, and at their last chemotherapy cycle, as well as at 3 months and 6-9 months after chemotherapy.
- The women were treated at the Medical University of Innsbruck, Austria, for various stages of breast cancer; about half were premenopausal.
TAKEAWAY:
- There was no significant difference between the scalp-cooling and control groups in patient-reported hair loss (P = .831), overall quality of life (P = .627), emotional functioning (P = .737), social functioning (P = .635), and body image (P = .463).
- On average, women stayed on treatment with the cooling cap for about 40% of the duration of their chemotherapy.
- Overall, 53 of 75 women (70.7%) stopped scalp cooling early, with most (73.9%) citing alopecia as the primary reason; only about 30% completed scalp cooling as planned.
IN PRACTICE:
“The efficacy and tolerability of [scalp cooling] applied in a clinical routine setting ... appeared to be limited,” the authors concluded. “The further determination and up-front definition of criteria prognostic for effectiveness of [scalp cooling] may be helpful to identify patient subgroups that may experience a treatment benefit.”
SOURCE:
The work, led by Christine Brunner, Medical University of Innsbruck, Austria, was published in Breast Cancer: Targets and Therapy.
LIMITATIONS:
- Shorter intervals between surveys might have given a more granular understanding of patients’ experiences with scalp cooling.
- There were no biomarker assessments to help identify patients more likely to benefit.
DISCLOSURES:
The work was supported by the Medical University of Innsbruck. Dr. Brunner disclosed a grant from Paxman UK, maker of the cooling cap used in the study. Another investigator disclosed personal fees from AstraZeneca, Daiichi Sankyo, Gilead, Lilly, Novartis, and Sirius.
A version of this article first appeared on Medscape.com.
BREAST CANCER: TARGETS AND THERAPY