Emergency intervention did not reduce women’s heavy drinking, partner violence


A brief motivational intervention was unsuccessful for reducing heavy drinking or intimate partner violence in women seeking emergency department (ED) treatment for related injuries, according to a report published online in JAMA.

The hope was that encouraging women to reduce both their drinking and their aggression toward their partners would decrease episodes of alcohol- and violence-related injury, said Dr. Karin V. Rhodes of the Center for Emergency Care Policy and Research and the School of Social Policy and Practice, University of Pennsylvania, Philadelphia, and her associates. The intervention entailed a face-to-face, manual-based discussion conducted in a secure ED location, followed by a telephone “booster” discussion 10 days later.

The intervention relied on reflective feedback, avoidance of confrontation, respect, empathy, and empowerment techniques to elicit the patient’s self-identified reasons for change and personal goals. The discussions centered on identifying links between drinking and violence, resolving any patient ambivalence about changing her behaviors, and supporting the patient’s autonomy and personal choice.

Such brief motivational interventions – 20- to 30-minute interactions with trained clinicians – have proved effective in the ED setting at reducing alcohol consumption and alcohol-related injuries in men who abuse alcohol.

“We did find that over time, reports of experiencing and perpetrating [intimate partner violence] and days of heavy drinking decreased significantly within the intervention and the control groups alike. However, there was no evidence that these outcomes were influenced by the intervention,” the researchers said. At both 3-month and 12-month follow-ups, patient reports of episodes of heavy drinking and intimate partner violence declined in all three study groups.

During a 2-year period, 592 women who sought treatment at two urban academic EDs agreed to participate in the study. The participants’ mean age was 32 years, and 43% disclosed a history of child sexual abuse, 40% screened positive for PTSD, and 85% screened positive for depression.

The women were randomly assigned to the brief intervention (239 patients), a control group that didn’t receive the intervention but did undergo periodic assessments of drinking and violence episodes (232 “assessed” controls), or a no-contact control group who were assessed only once at 3 months to determine if their drinking/violence episodes had improved (121 no-contact controls). All the women received usual care and a standard list of social services resources at the index ED visit.

At baseline, 51% of the intervention group and 46% of the assessed control group reported heavy drinking during the preceding week, which decreased to 43% and 41% at 3 months. Similarly, at baseline 57% of the intervention group and 63% of the assessed control group reported intimate partner violence during the preceding week, which decreased to 43% and 41% at 3 months, the investigators said. (JAMA. 2015 Aug 4. doi: 10.1001/jama.2015.8369.)

Article Source

FROM JAMA

Vitals

Key clinical point: A brief motivational intervention in the ED failed to reduce recurrent episodes of heavy drinking or intimate partner violence.

Major finding: At baseline, 51% of the intervention group and 46% of the assessed control group reported heavy drinking during the preceding week, which decreased to 43% and 41% at 3 months.

Data source: A prospective, randomized, controlled trial involving 592 women seeking ED treatment for intimate partner violence and heavy drinking who were followed for 1 year.

Disclosures: This study was supported by the National Institute on Alcohol Abuse and Alcoholism. Dr. Rhodes and her associates reported having no relevant financial disclosures.

Melanoma twice as likely after CLL/SLL than other types of NHL


Survivors of chronic lymphocytic leukemia/small lymphocytic lymphoma are twice as likely to develop melanoma as are survivors of other types of non-Hodgkin lymphoma, according to a report published online Aug. 3 in Journal of Clinical Oncology.

Since patients with chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL) have profound and prolonged immune dysfunction characterized by B-cell and T-cell defects, this finding suggests that immune perturbation may account for the excess of melanoma diagnoses observed in patients with non-Hodgkin lymphoma (NHL), said Dr. Clara J. K. Lam of the radiation epidemiology branch, National Cancer Institute, Bethesda, Md., and her associates.

Although patients with NHL are known to be at increased risk for melanoma compared with the general population, the reasons remain unclear. Additional factors such as chemotherapy regimens and sunlight exposure likely complicate the picture, and no studies to date have been able to account for these confounders. To assemble a study sample large enough to examine these issues, Dr. Lam and her associates analyzed data concerning 44,870 NHL survivors in the Surveillance, Epidemiology, and End Results (SEER) database. They focused on older patients aged 66-83 years at NHL diagnosis (mean age, 74 years) who were followed for at least 1 year (mean follow-up, 5.5 years), of whom 13,950 had CLL/SLL.

A total of 202 melanomas developed, and the median interval between NHL diagnosis and melanoma diagnosis was 3 years (range, 1-15 years). Nearly half of these melanomas occurred in patients with CLL/SLL rather than other types of NHL; 41% occurred on the face, head, or neck, and 43% were 1 mm or more in thickness. In contrast, among survivors of other NHL types, melanoma occurred most often on the trunk, and only 28% were 1 mm or more in thickness. This aligns with previous reports that melanomas arising after NHL tend to be more advanced and aggressive than those in the general population, the investigators said (J Clin Oncol. 2015 Aug 3. doi:10.1200/JCO.2014.60.2094).

Further analysis revealed that among patients with CLL/SLL, melanoma risk was significantly increased in those who received fludarabine rather than other treatments (HR, 1.90; 95% CI, 1.08-3.37), with or without the addition of rituximab. In contrast, melanoma risks were unrelated to treatment among patients who had other types of NHL.

Similarly, patients with CLL/SLL who had T-cell-activating autoimmune disorders (such as Graves’ disease, localized scleroderma, psoriasis, chronic rheumatic heart disease, asthma, or skin-related conditions) either before or after diagnosis of their leukemia/lymphoma had 2-4 times the risk of developing melanoma, compared with patients without such autoimmune disorders. In contrast, melanoma risks were unrelated to autoimmune disorders in patients with other types of NHL. This finding underscores the importance of T-cell dysfunction as a contributor to melanoma risk after CLL/SLL, Dr. Lam and her associates said.

Taken together, their findings identify which survivors of NHL are at highest risk for developing melanoma and would benefit the most from undergoing regular full-skin examinations to facilitate early detection.

This study was limited in that it was confined to patients over age 65 with NHL. The results may not be generalizable to younger patients, Dr. Lam and her associates added.

Article Source

FROM JOURNAL OF CLINICAL ONCOLOGY

Vitals

Key clinical point: Survivors of chronic lymphocytic leukemia/small lymphocytic lymphoma are twice as likely to develop melanoma as are survivors of other types of non-Hodgkin lymphoma.

Major finding: A total of 202 melanomas developed during 5.5 years of follow-up, and nearly half occurred in patients with CLL/SLL rather than other types of NHL.

Data source: A large population-based study using SEER data to assess melanoma risk in 44,870 older survivors of non-Hodgkin lymphoma.

Disclosures: The National Cancer Institute supported the study. Dr. Lam and her associates reported having no relevant financial disclosures.

Plasma tau level chronically elevated in TBI

Assay technique a game changer?

Peripheral plasma levels of the CNS protein tau are chronically elevated after traumatic brain injury and appear to correlate with the severity of postconcussive symptoms, according to a report published Aug. 3 in JAMA Neurology.

If these findings are confirmed, this will be the first biomarker that is sensitive and specific to persistent traumatic brain injury–related symptoms. The results also suggest that “months to years after the primary brain injury, there may be a continuation of secondary injuries with residual axonal degeneration and blood-brain barrier disruptions in this population that may contribute to the maintenance of postconcussive disorder symptoms and affect symptom severity,” wrote Anlys Olivera, Ph.D., of the National Institute of Nursing Research, Bethesda, Md., and her associates.

Tau is a protein that stabilizes the structure of the axonal cytoskeleton. It is elevated in the cerebrospinal fluid and the peripheral blood (albeit in extremely low concentrations) of patients with severe traumatic brain injury (TBI), professional boxers, and athletes who sustain concussions. The extremely low levels of tau in the peripheral blood have been very difficult to measure until the recent development of an ultrahigh-sensitivity immunoassay technology. Using this innovation, the researchers were able to examine for the first time the associations between plasma tau levels and the frequency and severity of deployment-related TBIs.

Over a 2-year period, Dr. Olivera and her associates assessed tau levels in 70 members of the military who self-reported one or more TBIs and 28 control subjects without TBI who were matched for age, sex, race, time since deployment, and number of deployments. Almost all of those in the TBI group had been injured at least 18 months previously. The most common sources of TBI were blows to the head, exposure to blasts, vehicular crashes, and sports-related concussions.

Total tau was significantly increased in the TBI group (mean level, 1.13 pg/mL), compared with the control group (0.63 pg/mL). Total tau also increased with increasing severity of the initial brain injury, with increasing numbers of TBIs, and with increasing severity of present-day postconcussive symptoms. These associations, moreover, were independent of symptoms of posttraumatic stress disorder (PTSD) and depression, which were prevalent in the TBI group, the investigators said (JAMA Neurol. 2015 Aug 3. doi: 10.1001/jamaneurol.2015.1383).

Tau is not only a marker of brain injury; it also can contribute to secondary injury processes such as inflammation, which makes it a potential target for therapy. If the findings of this study are confirmed and extended to demonstrate a direct mechanistic relationship between TBI and tau aggregation, treatments such as the direct delivery of proteasomes “would be invaluable, considering the dearth of treatments for TBIs and chronic [postconcussive disorder] symptoms,” Dr. Olivera and her associates said.

Among the limitations cited by the investigators are lack of neuroimaging and neuropsychological data.

This study was supported by the National Institutes of Health’s National Institute of Nursing Research and the Center for Neuroscience and Regenerative Medicine, which is a collaborative program between the Department of Defense and the NIH. Dr. Olivera reported having no relevant financial disclosures. One of her associates reported ties to Quanterix, developer of the ultrahigh-sensitivity Simoa technology used in this study, which allows measurement of extremely low levels of tau and other CNS-derived biomarkers in the plasma or serum.


Researchers and funding agencies have been hoping that the recent development of ultrahigh-sensitivity technology to measure the extremely low levels of CNS-derived biomarkers in the blood would be a game changer for minor TBI, and it appears that our hopes are beginning to be borne out.

The usefulness of plasma tau as a marker of brain pathology in TBI, however, was relatively weak in this study, and it remains to be seen whether this marker will prove helpful in clinical practice. The between-group differences in mean plasma tau levels were small in this study, and the levels in individual samples overlapped substantially between affected patients and controls.

Dr. Elaine R. Peskind is with the Veterans Affairs Northwest Network Mental Illness Research, Education, and Clinical Center and the department of psychiatry and behavioral sciences at the University of Washington, both in Seattle. She reported having no relevant financial conflicts of interest. Dr. Peskind and her associates made these remarks in an editorial accompanying Dr. Olivera’s report (JAMA Neurol. 2015 Aug 3. doi: 10.1001/jamaneurol.2015.1789).


Peripheral plasma levels of the CNS protein tau are chronically elevated after traumatic brain injury and appear to correlate with the severity of postconcussive symptoms, according to a report published Aug. 3 in JAMA Neurology.

Peripheral plasma levels of the CNS protein tau are chronically elevated after traumatic brain injury and appear to correlate with the severity of postconcussive symptoms, according to a report published Aug. 3 in JAMA Neurology.

If these findings are confirmed, this will be the first biomarker that is sensitive and specific to persistent traumatic brain injury–related symptoms. The results also suggest that “months to years after the primary brain injury, there may be a continuation of secondary injuries with residual axonal degeneration and blood-brain barrier disruptions in this population that may contribute to the maintenance of postconcussive disorder symptoms and affect symptom severity,” wrote Anlys Olivera, Ph.D., of the National Institute of Nursing Research, Bethesda, Md., and her associates.

Tau is a protein that stabilizes the structure of the axonal cytoskeleton. It is elevated in the cerebrospinal fluid and the peripheral blood (albeit in extremely low concentrations) of patients with severe traumatic brain injury (TBI), professional boxers, and athletes who sustain concussions. The extremely low levels of tau in the peripheral blood have been very difficult to measure until the recent development of an ultrahigh-sensitivity immunoassay technology. Using this innovation, the researchers were able to examine for the first time the associations between plasma tau levels and the frequency and severity of deployment-related TBIs.

Over a 2-year period, Dr. Olivera and her associates assessed tau levels in 70 members of the military who self-reported one or more TBIs and 28 control subjects without TBI who were matched for age, sex, race, time since deployment, and number of deployments. Almost all of those in the TBI group had been injured at least 18 months previously. The most common sources of TBI were blows to the head, exposure to blasts, vehicular crashes, and sports-related concussions.

Total tau was significantly increased in the TBI group (mean level, 1.13 pg/mL), compared with the control group (0.63 pg/mL). Total tau also increased with increasing severity of the initial brain injury, with increasing numbers of TBIs, and with increasing severity of present-day postconcussive symptoms. These associations, moreover, were independent of symptoms of posttraumatic stress disorder (PTSD) and depression, which were prevalent in the TBI group, the investigators said (JAMA Neurol. 2015 Aug 3 [doi: 10.1001/jamaneurol.2015.1383]).

Tau is not only a marker of brain injury; it also can contribute to secondary injury processes such as inflammation, which makes it a potential target for therapy. If the findings of this study are confirmed and extended to demonstrate a direct mechanistic relationship between TBI and tau aggregation, treatments such as the direct delivery of proteasomes “would be invaluable, considering the dearth of treatments for TBIs and chronic [postconcussive disorder] symptoms,” Dr. Olivera and her associates said.

Among the limitations cited by the investigators are lack of neuroimaging and neuropsychological data.

This study was supported by the National Institutes of Health’s National Institute of Nursing Research and the Center for Neuroscience and Regenerative Medicine, which is a collaborative program between the Department of Defense and the NIH. Dr. Olivera reported having no relevant financial disclosures. One of her associates reported ties to Quanterix, developer of the ultrahigh-sensitivity Simoa technology used in this study, which allows measurement of extremely low levels of tau and other CNS-derived biomarkers in the plasma or serum.

Display Headline
Plasma tau level chronically elevated in TBI
Article Source

FROM JAMA NEUROLOGY


Vitals

Key clinical point: Peripheral plasma levels of the CNS protein tau are chronically elevated after traumatic brain injury and correlate with the severity of postconcussive symptoms.

Major finding: Total tau was significantly increased in the TBI group (mean level, 1.13 pg/mL), compared with the control group (0.63 pg/mL).

Data source: An observational case-control study assessing plasma tau levels in 70 military personnel with a history of TBI and 28 control subjects.

Disclosures: This study was supported by the National Institutes of Health’s National Institute of Nursing Research and the Center for Neuroscience and Regenerative Medicine, which is a collaborative program between the Department of Defense and the NIH. Dr. Olivera reported having no relevant financial disclosures. One of her associates reported ties to Quanterix, developer of the ultrahigh-sensitivity Simoa technology used in this study, which allows measurement of extremely low levels of tau and other CNS-derived biomarkers in the plasma or serum.

New guidelines stress identifying Lynch syndrome


Clinicians and researchers must get better at identifying Lynch syndrome because the diagnosis is so often missed, according to new AGA clinical guidelines for diagnosing and managing the disorder, published in the September issue of Gastroenterology.

Lynch syndrome, previously known as hereditary nonpolyposis colorectal cancer syndrome (HNPCC), is the most common heritable cause of colorectal cancer and also is associated with cancers of the endometrium, stomach, small intestine, pancreas, biliary tract, ovary, urinary tract, and brain. Lynch syndrome accounts for 2%-3% of all colorectal cancers in the United States, and the estimated prevalence is 1 in 440 in the general population. People with the syndrome are estimated to have a lifetime cumulative incidence of colorectal cancer approaching 80%, and affected women have an estimated 60% lifetime cumulative incidence of endometrial cancer.

The new AGA guidelines focus on identifying Lynch syndrome, both in patients without cancer who have a family history suggestive of the disorder and in all patients who develop colorectal cancer, said Dr. Joel H. Rubenstein and his associates on the clinical guidelines committee (Gastroenterology 2015 Jul. 28 [doi: 10.1053/j.gastro.2015.07.036]).

The guidelines strongly recommend that all colorectal cancers now be tested using immunohistochemistry or assessment of microsatellite instability to identify potential cases of Lynch syndrome. Given the high incidence of colorectal cancer in the United States, this recommendation in particular “may be ripe for consideration as a process measure of quality of care,” they noted.

Older patients with colorectal cancer have not routinely undergone such testing because the yield of positive results is lower than in younger patients, but there is new appreciation that these results have a significant impact on younger family members, not just the patients themselves. From the perspective of preventing cancer in these relatives, such testing is actually cost effective, said Dr. Rubenstein of the Veterans Affairs Center for Clinical Management Research and the gastroenterology division at the University of Michigan, both in Ann Arbor, and his associates.

At present, the evidence is insufficient to recommend either of these tests above the other for identifying Lynch syndrome. The two have comparable sensitivities and specificities.

The guidelines also recommend that, in people who have no personal history of colorectal or other cancer but who have a family history that suggests Lynch syndrome, risk prediction should be performed, “rather than doing nothing.” If a first-degree relative is known to have a Lynch mutation, people should be offered germline genetic testing for that mutation. Alternatively, “if tumor tissue from an affected relative is available, the screening process should begin with testing on that tumor.”

If none of this information is available, online risk prediction models or free downloadable software incorporating such models can be used to quickly and easily estimate the probability of carrying a Lynch syndrome mutation. This approach is “imperative” to improve case finding, since it is likely that most Lynch syndrome kindreds are undiagnosed, Dr. Rubenstein and his associates said.

Such patients should be offered risk-prediction models rather than proceeding directly to germline genetic testing because of the currently high costs of genetic testing. People without cancer who have a family history suggestive of Lynch syndrome should proceed straight to germline testing if they are considered to be at high risk – for example, if they meet the highly specific Amsterdam criteria.

The AGA guidelines strongly recommend that patients identified as having Lynch syndrome undergo surveillance colonoscopy, as opposed to no surveillance. Good-quality evidence shows that this strategy decreases the overall burden of colorectal cancer and reduces disease-specific mortality. People who carry Lynch syndrome genetic mutations increase their life expectancy by 7 years if they undergo surveillance colonoscopy, and cost-effectiveness analyses indicate that the expense of such screening is lower than the expense of no screening.

The optimal screening interval for such patients has not been determined, but low-quality evidence suggests that undergoing colonoscopy every 1-2 years is the “most prudent” course and is better than doing so at longer intervals.

The guidelines also include a conditional recommendation that people found to have Lynch syndrome should be offered aspirin therapy as cancer prophylaxis. The optimal dose and frequency of aspirin use is not yet known, and there is no evidence that the treatment improves mortality, but some low-quality evidence suggests that aspirin therapy reduces the risk of colorectal and other cancers, and the risk of adverse events is quite low.


Article Source

FROM GASTROENTEROLOGY


Vitals

Key clinical point: New AGA guidelines stress that identification of Lynch syndrome must improve because it is so underdiagnosed.

Major finding: Lynch syndrome accounts for 2%-3% of all colorectal cancers in the United States, and the estimated prevalence is 1 in 440 members of the general population.

Data source: A review of the literature and compilation of six key recommendations for diagnosing and managing Lynch syndrome.

Disclosures: This work was supported by the AGA Institute. Dr. Rubenstein and his associates on the clinical guidelines committee reported having no relevant financial or professional conflicts of interest.

LVEF improvements over time in ICD recipients tied to lower mortality

Larger, longer-term study needed

In the one-quarter of heart failure patients who receive an implantable cardioverter defibrillator (ICD) for primary prevention and whose left ventricular ejection fraction (LVEF) improves to more than 35%, both mortality and appropriate ICD shocks are decreased, according to a report published online July 27 in the Journal of the American College of Cardiology.

This raises the question of whether such patients’ risk for sudden cardiac death still warrants replacement of the ICD generator years later, especially among those whose devices have never needed to deliver a shock, said Yiyi Zhang, Ph.D., of the Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins University, Baltimore, and her associates.


To examine this issue, the investigators analyzed data from PROSE-ICD (Prospective Observational Study of Implantable Cardioverter-Defibrillators), in which patients with systolic heart failure received primary-prevention ICDs at four U.S. cardiology centers after an initial LVEF assessment. For their study, Dr. Zhang and her associates focused on 538 of these study participants whose LVEF was reassessed at least once during roughly 5 years of follow-up.

About 57% of the study subjects were white and 70% were men. The average age at baseline was 59 years.

LVEF improved after ICD implantation in 215 (40%) of the participants, including 134 patients (25%) in whom it improved to greater than 35%. These patients were at significantly reduced risk of all-cause mortality and of requiring ICD shocks, compared with patients whose LVEF was either unchanged (47%) or decreased (13%) after ICD implantation, the investigators said. In a Cox regression model adjusted for age, sex, race, and baseline LVEF and stratified by enrollment center, the hazard ratio for all-cause mortality was 0.31, and that for an appropriate shock was 0.33 (J. Am. Coll. Cardiol. 2015 July 27 [doi: 10.1016/j.jacc.2015.05.057]).

The mode of death could not be determined in many cases because records were unreliable for patients who died out of hospital, so the researchers couldn’t examine any association between LVEF changes and cardiac-specific mortality.

These study results are consistent with those of several previous studies, Dr. Zhang and her associates noted.

“Findings from our study indicate that repeated LVEF assessment after ICD implantation can provide additional prognostic information and may also allow for more informed decision making regarding ICD generator replacement, especially in patients whose LVEF improved significantly,” they said.

Further studies in larger populations that have more frequent LVEF reassessments are needed to establish whether ICD generator replacement has a positive or negative impact on this patient population, and to better guide clinicians in deciding whether ICD generator replacement should be deferred in individual patients, the investigators added.


Dr. Zhang and colleagues have conducted a meticulous analysis and made an important contribution to a critical area of patient care.

However, even though the findings were consistent with those of previous studies and even though this is the largest series of ICD recipients with improved LVEF done to date, it included only 134 such patients. These are small numbers, and the results should be interpreted with caution.

The essential question for physicians – helping patients decide if the benefit of continued ICD therapy is worth the risk – requires longer-term follow-up in a considerably larger study population.

Dr. Kristen K. Patton is in the division of cardiology at the University of Washington, Seattle. She reported having no relevant financial disclosures. Dr. Patton made these remarks in an editorial comment accompanying Dr. Zhang’s report (J. Am. Coll. Cardiol. 2015 July 27 [doi:10.1016/j.jacc.2015.06.015]).


In the one-quarter of heart failure patients who receive an implantable cardioverter defibrillator for primary prevention and whose left ventricular ejection fraction improves more than 35%, both mortality and appropriate ICD shocks are decreased, according to a report published online July 27 in Journal of the American College of Cardiology.

This raises the question of whether such patients’ risk for sudden cardiac death still warrants replacement of the ICD generator years later, especially among those whose devices have never needed to deliver a shock, said Yiyi Zhang, Ph.D., of the Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins University, Baltimore, and her associates.

© janulla/Thinkstock

To examine this issue, the investigators analyzed data from PROSE-ICD (Prospective Observational Study of Implantable Cardioverter-Defibrillators), in which patients with systolic heart failure received primary-prevention ICDs at four U.S. cardiology centers after an initial LVEF assessment. For their study, Dr. Zhang and her associates focused on 538 of these study participants whose LVEF was reassessed at least once during roughly 5 years of follow-up.

About 57% of the study subjects were white and 70% were men. The average age at baseline was 59 years.

LVEF improved after ICD implantation in 215 (40%) of the participants, including 134 patients (25%) in whom it improved to greater than 35%. These patients were at significantly reduced risk of all-cause mortality and of requiring ICD shocks, compared with patients whose LVEF was either unchanged (47%) or decreased (13%) after ICD implantation, the investigators said. In a Cox regression model adjusted for age, sex, race, baseline LVEF, and stratified by enrollment center, the hazard ratio for all-cause mortality was 0.31, and that for an appropriate shock was 0.33 (J. Am. Coll. Cardiol. 2015 July 27 [doi:10.1016/j.jacc.2015.05.057]).

The mode of death could not be determined in many cases because records were unreliable for patients who died out of hospital, so the researchers couldn’t examine any association between LVEF changes and cardiac-specific mortality.

These study results are consistent with those of several previous studies, Dr. Zhang and her associates noted.

“Findings from our study indicate that repeated LVEF assessment after ICD implantation can provide additional prognostic information and may also allow for more informed decision making regarding ICD generator replacement, especially in patients whose LVEF improved significantly,” they said.

Further studies in larger populations that have more frequent LVEF reassessments are needed to establish whether ICD generator replacement has a positive or negative impact on this patient population, and to better guide clinicians in deciding whether ICD generator replacement should be deferred in individual patients, the investigators added.

In the one-quarter of heart failure patients who receive an implantable cardioverter defibrillator for primary prevention and whose left ventricular ejection fraction improves more than 35%, both mortality and appropriate ICD shocks are decreased, according to a report published online July 27 in Journal of the American College of Cardiology.

This raises the question of whether such patients’ risk for sudden cardiac death still warrants replacement of the ICD generator years later, especially among those whose devices have never needed to deliver a shock, said Yiyi Zhang, Ph.D., of the Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins University, Baltimore, and her associates.

© janulla/Thinkstock

To examine this issue, the investigators analyzed data from PROSE-ICD (Prospective Observational Study of Implantable Cardioverter-Defibrillators), in which patients with systolic heart failure received primary-prevention ICDs at four U.S. cardiology centers after an initial LVEF assessment. For their study, Dr. Zhang and her associates focused on 538 of these study participants whose LVEF was reassessed at least once during roughly 5 years of follow-up.

About 57% of the study subjects were white and 70% were men. The average age at baseline was 59 years.

LVEF improved after ICD implantation in 215 (40%) of the participants, including 134 patients (25%) in whom it improved to greater than 35%. These patients were at significantly reduced risk of all-cause mortality and of requiring ICD shocks, compared with patients whose LVEF was either unchanged (47%) or decreased (13%) after ICD implantation, the investigators said. In a Cox regression model adjusted for age, sex, race, and baseline LVEF, and stratified by enrollment center, the hazard ratio for all-cause mortality was 0.31, and that for an appropriate shock was 0.33 (J. Am. Coll. Cardiol. 2015 July 27 [doi:10.1016/j.jacc.2015.05.057]).

The mode of death could not be determined in many cases because records were unreliable for patients who died out of hospital, so the researchers couldn’t examine any association between LVEF changes and cardiac-specific mortality.

These study results are consistent with those of several previous studies, Dr. Zhang and her associates noted.

“Findings from our study indicate that repeated LVEF assessment after ICD implantation can provide additional prognostic information and may also allow for more informed decision making regarding ICD generator replacement, especially in patients whose LVEF improved significantly,” they said.

Further studies in larger populations that have more frequent LVEF reassessments are needed to establish whether ICD generator replacement has a positive or negative impact on this patient population, and to better guide clinicians in deciding whether ICD generator replacement should be deferred in individual patients, the investigators added.

Display Headline
LVEF improvements over time in ICD recipients tied to lower mortality

Article Source

FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY

Vitals

Key clinical point: LVEF improves to greater than 35% in about a quarter of patients with heart failure who receive ICDs for primary prevention, and that improvement is associated with lower mortality and fewer appropriate shocks.

Major finding: In HF patients whose LVEF improved to greater than 35% after ICD implantation, all-cause mortality and appropriate shocks were reduced by nearly 70% (hazard ratios, 0.31 and 0.33, respectively).

Data source: A secondary analysis of data from PROSE-ICD, a multicenter prospective observational study in 538 HF patients with ICDs whose LVEF was reassessed at least once during roughly 5 years of follow-up.

Disclosures: This study was supported by the Donald W. Reynolds Foundation and the National Institutes of Health. Dr. Zhang reported having no relevant financial disclosures; Dr. Zhang’s associates reported ties to Medtronic, Boston Scientific, Biotronik, and St. Jude Medical.

Adding ezetimibe to atorvastatin boosts coronary plaque regression

Ezetimibe’s non-cholesterol effects
Article Type
Changed
Display Headline
Adding ezetimibe to atorvastatin boosts coronary plaque regression

Adding ezetimibe to atorvastatin boosted the regression of coronary plaque in patients with elevated cholesterol who had just undergone percutaneous coronary intervention (PCI) for acute coronary syndromes or stable angina, according to a report published July 27 in the Journal of the American College of Cardiology.

Compared with atorvastatin monotherapy, the dual-agent lipid-lowering strategy also improved vascular remodeling in the targeted artery segments to a much greater degree in the prospective, multicenter trial known as PRECISE-IVUS (Plaque Regression With Cholesterol Absorption Inhibitor or Synthesis Inhibitor Evaluated by Intravascular Ultrasound).

“Our positive results from the PRECISE-IVUS trial could lead to an early reevaluation of the new [American College of Cardiology/ American Heart Association] lipid management guidelines that endorses statins as the only recommended drugs for treating cholesterol-related CV risk,” wrote Kenichi Tsujita, M.D., Ph.D., of Kumamoto University in Japan, and his associates.

The study findings also suggest that adding ezetimibe can benefit patients who are unable to tolerate high-dose statins, as well as those who don’t achieve adequate cholesterol control despite maximal statin therapy.

©Ugreen/thinkstockphotos.com

The investigators compared atorvastatin monotherapy against atorvastatin plus ezetimibe in 246 adults who underwent PCI under intravascular ultrasound guidance at 17 cardiovascular centers. The participants had LDL-cholesterol levels of more than 100 mg/dL and were randomly assigned to receive either atorvastatin plus ezetimibe (122 patients) or atorvastatin alone (124 patients) for 9-12 months, at which time they underwent repeat intravascular ultrasound imaging.

The primary end point – the absolute decrease in percent atheroma volume of the selected coronary segment – was superior with dual therapy (1.4% vs. 0.3%), and a significantly greater percentage of patients in the dual-therapy group showed coronary plaque regression (78% vs. 58%).

These between-group differences were most pronounced in the subset of patients with acute coronary syndromes, Dr. Tsujita and his associates wrote (J. Am. Coll. Cardiol. 2015; 66: 495-507 [doi:10.1016/j.jacc.2015.05.065]).

But despite the beneficial effect of combination therapy on coronary plaque, the frequencies of adverse cardiovascular events and the rates of target-vessel revascularization were similar between the two study groups.

The study was supported in part by the Japanese Ministry of Education, Science, and Culture. Dr. Tsujita reported having no relevant financial disclosures; his associates reported ties to Bayer, Boehringer Ingelheim, Daiichi-Sankyo, MSD, Pfizer, Takeda, Novartis, AstraZeneca, Astellas, Bristol-Myers Squibb, Chugai, Dainippon Sumitomo Pharma, Kowa, Otsuka, Sanofi, and Shionogi.

At first glance, a greater reduction in LDL-cholesterol with dual-agent therapy appears to account for the greater reduction in atheroma volume in this excellent study. But linear regression analysis didn’t show any association between LDL levels and coronary plaque regression, so ezetimibe’s other beneficial effects must be involved.

Other sterols may play a role. Unlike the decrease in cholesterol level, the reduction in campesterol-to-cholesterol ratio was significantly associated with coronary plaque regression in a linear fashion. And lathosterol, campesterol, and sitosterol declined with dual therapy but increased with monotherapy. It would be interesting to investigate the clinical relevance of ezetimibe’s possible pleiotropic effects that are unrelated to cholesterol lowering.

Dr. Filippo Crea and Dr. Giampaolo Niccoli are at the Institute of Cardiology at Catholic University of the Sacred Heart, Rome. Both reported having no relevant financial disclosures. These comments were adapted from an editorial (J. Am. Coll. Cardiol. 2015; 66: 508-10 [doi:10.1016/j.jacc.2015.05.064]).

Legacy Keywords
ezetimibe, atorvastatin, coronary plaque regression, PCI

Article Source

FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY

Vitals

Key clinical point: Adding ezetimibe to atorvastatin increased coronary plaque regression in Japanese patients who had recently undergone PCI.

Major finding: The primary end point – the absolute decrease in percent atheroma volume of the selected coronary segment – was superior with dual therapy (1.4% vs. 0.3%), and a significantly greater percentage of patients in the dual-therapy group showed coronary plaque regression (78% vs. 58%).

Data source: A multicenter, prospective, randomized, controlled, single-blind study involving 246 Japanese patients who underwent PCI for acute coronary syndromes or stable angina and were followed for 9-12 months.

Disclosures: This study was supported in part by the Japanese Ministry of Education, Science, and Culture. Dr. Tsujita reported having no relevant financial disclosures; some of his associates reported ties to Bayer, Boehringer Ingelheim, Daiichi-Sankyo, MSD, Pfizer, Takeda, Novartis, AstraZeneca, Astellas, Bristol-Myers Squibb, Chugai, Dainippon Sumitomo Pharma, Kowa, Otsuka, Sanofi, and Shionogi.

Most hospitals overestimate their door-to-needle performance

Article Type
Changed
Display Headline
Most hospitals overestimate their door-to-needle performance

Personnel at most hospitals that treat acute stroke, particularly the lowest-performing hospitals, greatly overestimate their ability to deliver tissue plasminogen activator (TPA) to eligible patients within 1 hour, according to a report published online July 22 in the Journal of the American Heart Association.

Overestimating the quality of care they actually provide may perpetuate this suboptimal performance, “whereas accurate measurements of current performance and realistic comparison to other, more successful, sites might provide the needed motivation to fuel quality improvement,” said Dr. Cheryl B. Lin of Tufts Medical Center Floating Hospital for Children, Boston, and her associates.

Copyright American Stroke Association

They compared stroke teams’ perceptions of their door-to-needle performance, as measured on survey questionnaires answered by nurses, neurologists, and other staff members, against the hospitals’ actual performance, which was recorded in a large stroke registry. The investigators focused on 141 hospitals that treated 48,201 stroke patients during a 1-year period: 49 top-performing, 52 average-performing, and 40 low-performing hospitals. The top category had door-to-needle rates of 45%-93%, the middle category had rates of 16%-25%, and the bottom category had consistent rates of 0%.

Regardless of their hospital’s performance category, 61% of the respondents overestimated how many eligible patients at their hospital actually received TPA within 1 hour. The lowest-performing hospitals had the most unrealistic estimates, with 68% of them guessing that 20% of their patients received timely TPA when in fact 0% of patients did so. Low-performing hospitals also overestimated their performance in comparison with other hospitals, with 85% of them characterizing their performance as average, above average, or even superior relative to other hospitals, when in fact it was very poor, Dr. Lin and her associates wrote (J. Am. Heart Assoc. 2015 July 22 [doi:10.1161/JAHA.114.001298]).

“Addressing misperceptions that one’s performance is average or above average when it actually is not is an important step in addressing motivation for change,” they added.

The study was supported by the U.S. Agency for Healthcare Research and Quality. Dr. Lin reported having no relevant financial disclosures; her associates reported ties to Genentech, Lilly, Johnson & Johnson, Bristol-Myers Squibb, Sanofi-Aventis, and Merck Schering-Plough.


Article Source

FROM THE JOURNAL OF THE AMERICAN HEART ASSOCIATION

Vitals

Key clinical point: Personnel at most hospitals, particularly the lowest-performing hospitals, greatly overestimate their performance at giving stroke patients TPA within 1 hour of arrival.

Major finding: The lowest-performing hospitals had the most unrealistic estimates of their door-to-needle times, with 68% of them guessing that 20% of their patients received timely TPA when in fact 0% of their patients did so.

Data source: An analysis of data in a stroke registry regarding 141 hospitals that treated 48,201 patients during a 1-year period, plus a survey of stroke personnel at those hospitals.

Disclosures: This study was supported by the U.S. Agency for Healthcare Research and Quality. Dr. Lin reported having no relevant financial disclosures; her associates reported ties to Genentech, Lilly, Johnson & Johnson, Bristol-Myers Squibb, Sanofi-Aventis, and Merck Schering-Plough.

Study cannot rule out pioglitazone link to bladder cancer

Article Type
Changed
Display Headline
Study cannot rule out pioglitazone link to bladder cancer

The diabetes medication pioglitazone did not increase the risk of bladder cancer to a statistically significant degree in a large cohort study with a nested case-control substudy, but “a small increased risk” could not be ruled out, according to a report published online July 21 in JAMA.

Moreover, pioglitazone was linked to increased risks of prostate cancer and pancreatic cancer in a second cohort assessed in this study, a finding that “merits further investigation to assess whether the observed associations are causal or due to chance, residual confounding, or reverse causality,” said Dr. James D. Lewis of the Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia, and his associates.

©Sebastian Kaulitzki/ thinkstockphotos.com

Preclinical studies of pioglitazone showed an increase in bladder cancer in male rats, and early clinical studies detected a possible safety signal for bladder cancer in humans. The Food and Drug Administration and Takeda, the developer of pioglitazone, agreed to a 10-year observational study to assess a possible link to the disease, using data from the electronic health records of diabetes patients aged 40 and older enrolled in a large California health plan.

The study cohort comprised 193,099 patients, including 34,181 who took pioglitazone, who were followed for a median of 6-7 years (range, 1-16 years) for the development of bladder cancer. A total of 1,261 participants developed the disease, representing 0.65% of the entire cohort. There was no statistically significant association between the use of pioglitazone and bladder cancer (HR, 1.06), although the crude incidence of the disease was higher with exposure to pioglitazone (89.8 cases per 100,000 person-years) than without such exposure (75.9 cases per 100,000 person-years). The risk of bladder cancer also showed no clear association with duration of treatment or cumulative dose, the investigators said (JAMA 2015 July 21 [doi:10.1001/jama.2015.7996]).

The findings were similar in a nested, case-control substudy involving 464 patients who developed bladder cancer and 464 control subjects matched for age, sex, and date of study entry. The rate of pioglitazone use was identical between these two groups, at 16%.

However, given the confidence intervals of the data concerning more than 4 years of pioglitazone use, “this study cannot exclude up to a 54% increased risk of bladder cancer,” Dr. Lewis and his associates noted.

The researchers also analyzed data from a second observational cohort of 236,507 diabetes patients aged 40 and older who were followed for the development of 10 other cancers from enrollment in 1997-2005 until 2012. Unlike the first cohort, the data from this cohort were adjusted to account for several potential confounding factors such as race/ethnicity, diabetes duration, smoking history, and occupational exposures. A total of 15,993 patients had incident cancers of the prostate, pancreas, breast, lung/bronchus, endometrium, colon, rectum, or kidney/renal pelvis, or non-Hodgkin lymphoma or melanoma. Pioglitazone was associated with an increased risk of prostate cancer (HR, 1.13) and pancreatic cancer (HR, 1.41), but not with the other cancers.

The diabetes medication pioglitazone did not increase the risk of bladder cancer to a statistically significant degree in a large cohort study with a nested case-control substudy, but “a small increased risk” could not be ruled out, according to a report published online July 21 in JAMA.

Moreover, pioglitazone was linked to increased risks of prostate cancer and pancreatic cancer in a second cohort assessed in this study, a finding that “merits further investigation to assess whether the observed associations are causal or due to chance, residual confounding, or reverse causality,” said Dr. James D. Lewis of the center for clinical epidemiology and biostatistics, University of Pennsylvania, Philadelphia, and his associates.

©Sebastian Kaulitzki/ thinkstockphotos.com

Preclinical studies of pioglitazone showed an increase in bladder cancer in male rats, and early clinical studies detected a possible safety signal for bladder cancer in humans. The Food and Drug Administration and Takeda, the developer of pioglitazone, agreed to a 10-year observational study to assess a possible link to the disease, using data from the electronic health records of diabetes patients aged 40 and older enrolled in a large California health plan.

The study cohort comprised 193,099 patients, including 34,181 who took pioglitazone, who were followed for a median of 6-7 years (range, 1-16 years) for the development of bladder cancer. A total of 1,261 participants developed the disease, representing 0.65% of the entire cohort. There was no statistically significant association between the use of pioglitazone and bladder cancer (HR, 1.06), although the crude incidence of the disease was higher with exposure to pioglitazone (89.8 cases per 100,000 person-years) than without such exposure (75.9 cases per 100,000 person-years). The risk of bladder cancer also showed no clear association with duration of treatment or cumulative dose, the investigators said (JAMA 2015 July 21 [doi:10.1001/jama.2015.7996]).

The findings were similar in a nested, case-control substudy involving 464 patients who developed bladder cancer and 464 control subjects matched for age, sex, and date of study entry. The rate of pioglitazone use was identical between these two groups, at 16%.

However, given the confidence intervals of the data concerning more than 4 years of pioglitazone use, “this study cannot exclude up to a 54% increased risk of bladder cancer,” Dr. Lewis and his associates noted.

The researchers also analyzed data from a second observational cohort of 236,507 diabetes patients aged 40 and older who were followed for the development of 10 other cancers from enrollment in 1997-2005 until 2012. Unlike the first cohort, the data from this cohort were adjusted to account for several potential confounding factors such as race/ethnicity, diabetes duration, smoking history, and occupational exposures. A total of 15,993 patients had incident cancers of the prostate, pancreas, breast, lung/bronchus, endometrium, colon, rectum, or kidney/renal pelvis, or non-Hodgkin lymphoma or melanoma. Pioglitazone was associated with an increased risk of prostate cancer (HR, 1.13) and pancreatic cancer (HR, 1.41), but not with the other cancers.

Display Headline
Study cannot rule out pioglitazone link to bladder cancer

Article Source

FROM JAMA

Vitals

Key clinical point: Pioglitazone did not increase the risk of bladder cancer significantly in this study, but an association still cannot be ruled out.

Major finding: There was no statistically significant association between the use of pioglitazone and bladder cancer (HR, 1.06), although the crude incidence of the disease was higher with exposure to pioglitazone (89.8 cases per 100,000 person-years) than without such exposure (75.9 cases per 100,000 person-years).

Data source: An observational cohort study involving 193,099 patients with diabetes and a nested case-control study involving a subset of 928 of these patients, who were followed for a median of 7 years.

Disclosures: This study was funded by Takeda, the company that developed pioglitazone. Dr. Lewis reported serving as a consultant for Takeda, Janssen, and Lilly, as well as ties to Bayer, Pfizer, Merck, AstraZeneca, Centocor, and Nestle Health Science. His associates reported ties to numerous industry sponsors.

Dicloxacillin may cut INR levels in warfarin users

Article Type
Changed
Display Headline
Dicloxacillin may cut INR levels in warfarin users

The antibiotic dicloxacillin appears to markedly decrease INR (international normalized ratio) levels in patients taking warfarin, reducing the mean INR to the subtherapeutic range in most patients who take both drugs concomitantly, according to a research letter published online July 20 in JAMA.

Adverse interactions between warfarin and other drugs are often suspected, but solid data are lacking. Case reports have suggested that the commonly used antibiotic dicloxacillin reduces warfarin’s anticoagulant effects, but no studies have examined the issue, said Anton Pottegård, Ph.D., of the department of clinical pharmacology, University of Southern Denmark, Odense, and his associates (JAMA 2015;314:296-7).

To further investigate that possibility, the investigators analyzed information in an anticoagulant database covering 7,400 patients treated by three outpatient clinics and 50 general practitioners during a 15-year period. They focused on weekly INR levels recorded for 236 patients (median age, 68 years), most of whom took warfarin because of atrial fibrillation or heart valve replacement.

The mean INR level before dicloxacillin exposure was 2.59, compared with 1.97 after dicloxacillin exposure (P < .001). A total of 144 patients (61%) had subtherapeutic INR levels (< 2.0) during the 2-4 weeks following a course of dicloxacillin, Dr. Pottegård and his associates said.
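The proportions in the letter follow directly from the reported counts; a quick arithmetic check using only the figures given above:

```python
# Counts reported in the research letter.
subtherapeutic = 144   # warfarin users with INR < 2.0 after dicloxacillin
warfarin_total = 236   # warfarin users with weekly INR data

share = subtherapeutic / warfarin_total * 100
print(f"{share:.0f}%")  # prints "61%"

# Mean INR fell from 2.59 before exposure to 1.97 after,
# a drop of 0.62 units.
assert round(2.59 - 1.97, 2) == 0.62
```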

A similar but less drastic decrease was observed among the 64 patients taking a different anticoagulant, phenprocoumon, who were given dicloxacillin. Mean INR levels dropped from 2.61 before exposure to 2.30 afterward (P = .003), and 41% of the group had subtherapeutic INR levels after taking the antibiotic.

No sponsor was reported for this study. Dr. Pottegård and his associates reported having no relevant financial disclosures.

Article Source

FROM JAMA

Vitals

Key clinical point: The antibiotic dicloxacillin appears to markedly decrease INR levels in patients using warfarin.

Major finding: A total of 144 patients taking warfarin (61%) had subtherapeutic international normalized ratio levels during the 2-4 weeks following a course of dicloxacillin.

Data source: An analysis of INR levels before and after antibiotic use from a Danish database of 7,400 patients taking anticoagulants.

Disclosures: No sponsor was reported for this study. Dr. Pottegård and his associates reported having no relevant financial disclosures.

ASCO: Include more older adults in cancer trials

Article Type
Changed
Display Headline
ASCO: Include more older adults in cancer trials

Cancer research must include more older adult participants because the evidence base for treating this patient population is too sparse, according to an American Society of Clinical Oncology position statement published online July 20 in Journal of Clinical Oncology.

Key evidence is lacking because older adults are usually excluded from clinical trials, even though most cancer patients are aged 65 and older. Patients and their clinicians are thus forced to base treatment plans on data from younger, healthier patients, drawn from studies that often do not even assess the endpoints that matter most to older adults: not just survival, but quality of life and functional independence.

Moreover, older adults respond differently than younger adults to cancer treatments, because of age-associated physiologic changes, a higher incidence of comorbidities, and greater use of medications that may interact with cancer therapies. “We need to see clinical trials that mirror the age distribution and health risk profile of patients with cancer,” said Dr. Arti Hurria, coauthor of the statement and director of the cancer and aging research program at City of Hope, Duarte, Calif., and her associates.

The position statement includes five recommendations and 16 specific action items to achieve this goal.

First, the cancer research community – regulatory agencies, study funders, and researchers – must expand eligibility criteria so more older adults can participate in studies. A rationale must be provided for all restrictions based on age, performance status, or comorbidities. And funders such as the National Cancer Institute and the National Institute on Aging should incentivize research that includes older adults.

Second, research design and infrastructure must be used to incentivize the inclusion of older adults. For example, Medicare could cover the off-label use of cancer therapies in older patients in selected trials, and research databases could be encouraged to collect information pertaining to older patients.

Third, the Food and Drug Administration should be given authority to both incentivize and require studies to include older adults. For example, the agency could reward drug manufacturers for including older patients in trials of new cancer therapies by granting them 6-month patent extensions, or it could encourage the development of new agents by expediting their review. Alternatively, the FDA could limit the compensation available to manufacturers that don’t include older study subjects. And the FDA should include geriatrics experts on its advisory boards, such as the Oncology Drug Advisory Committee.

Fourth, clinicians should increase the recruitment of their older patients into clinical trials. The single most important predictor of whether or not a cancer patient enrolls in a study is that his or her clinician has recommended it. And one way to increase such recommendations is to increase reimbursement for the time and effort it takes clinicians to find and explain relevant studies to patients.

Finally, professional journals should incentivize researchers to report on the substantial data they already collect about older study subjects but do not analyze or report on. And professional journals should include geriatric oncology experts on their editorial boards and as peer reviewers, to ensure that cancer research results are applicable to the majority of people who have cancer, according to the position statement (J. Clin. Oncol. 2015 July 20 [doi:10.1200/JCO.2015.63.0319]).

In a separate report, ASCO also called for the cancer research community to redefine eligibility criteria specifically for studies of molecular medicine. The Cancer Research Committee developed a list of questions to be used to streamline such criteria. Streamlining, in turn, is expected to enhance enrollment in studies of targeted biologics, expedite their development and approval, and better inform clinicians about their use in real-world patients, said Dr. Edward S. Kim, immediate past chair of the committee and chair of solid tumor oncology and investigational therapeutics, Levine Cancer Institute, Charlotte, N.C., and his associates.

ASCO plans to organize a public workshop “with input from regulatory bodies and key stakeholders, with a goal of developing an algorithmic approach to determining eligibility criteria for individual study protocols,” he said (J. Clin. Oncol. 2015 July 20 [doi:10.1200/JCO.2015.62.1854]).

This position statement was supported by a subcommittee of the American Society of Clinical Oncology’s cancer research committee. Dr. Hurria reported serving as a consultant or advisor for and receiving research funding from GTx, Seattle Genetics, Boehringer Ingelheim, GlaxoSmithKline, and Celgene; his associates reported ties to numerous industry sources. Dr. Kim reported ties to Celgene, Eli Lilly, and Myriad Genetics, and his associates reported ties to numerous industry sources.

Article Source

FROM JOURNAL OF CLINICAL ONCOLOGY

Vitals

Key clinical point: ASCO compiled five recommendations to increase the number of older adults included in cancer studies.

Major finding: Key evidence is lacking on treatment outcomes in older adults because this patient population is usually excluded from clinical trials, even though most cancer patients are aged 65 and older.

Data source: A position statement listing five recommendations for increasing the number of older participants in cancer studies.

Disclosures: This position statement was supported by a subcommittee of the American Society of Clinical Oncology’s cancer research committee. Dr. Hurria reported serving as a consultant or advisor for and receiving research funding from GTx, Seattle Genetics, Boehringer Ingelheim, GlaxoSmithKline, and Celgene; his associates reported ties to numerous industry sources. Dr. Kim reported ties to Celgene, Eli Lilly, and Myriad Genetics, and his associates reported ties to numerous industry sources.