Neratinib not superior to trastuzumab
Neratinib plus paclitaxel was not superior to trastuzumab plus paclitaxel as first-line therapy for ERBB2-positive metastatic breast cancer in an international phase II trial, according to investigators.
Recent research had suggested that small-molecule ERBB2 kinase inhibitors might be particularly effective against CNS metastases in such cases, so investigators performed the open-label randomized clinical trial in 479 women at 188 medical centers. After a median follow-up of 23 months, the primary endpoint – median progression-free survival – was 12.9 months for both neratinib-paclitaxel and trastuzumab-paclitaxel, said Dr. Ahmad Awada of the Medical Oncology Clinic, Jules Bordet Institute, Brussels, and his associates.
This outcome was consistent across all subgroups of patients, regardless of age, race, area of residence, hormone receptor status, or prior exposure to trastuzumab. Neratinib also was not superior to trastuzumab in any of the secondary study endpoints, including objective response rate, duration of response, and clinical benefit rate. These findings suggest that the two agents have similar efficacy in this patient population, the investigators said (JAMA Oncol. 2016 April 14. doi: 10.1001/jamaoncol.2016.0237).
However, neratinib was associated with a reduced frequency of symptomatic or progressive CNS recurrences (RR, 0.48), as well as a delayed onset of such recurrences (HR, 0.45), compared with trastuzumab. This finding warrants further investigation in a larger trial with predefined CNS endpoints, they noted.
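For readers parsing these statistics, the relative risk compares the cumulative proportion of patients with a CNS event in each arm, while the hazard ratio compares the rates at which events accrued over time. As a minimal illustrative reading of the published value (the per-arm event proportions, denoted p below, are not reported in this summary):

\[ \mathrm{RR} = \frac{p_{\text{neratinib}}}{p_{\text{trastuzumab}}} = 0.48 \]

In other words, symptomatic or progressive CNS recurrence was observed roughly half as often with neratinib as with trastuzumab.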
Neratinib was associated with more frequent adverse effects than trastuzumab, chiefly diarrhea (92.5% vs 33.3%) and nausea (44.2% vs 30.3%). Grade 3 diarrhea developed in 30.4% of patients receiving neratinib, compared with 3.8% of those receiving trastuzumab, and diarrhea accounted for discontinuation of study treatment in 3.8% of patients receiving neratinib, compared with 0.4% of those receiving trastuzumab. Aggressive primary prophylaxis of diarrhea should now be required for the first cycle of neratinib therapy, Dr. Awada and his associates added.
Neratinib’s benefits regarding CNS progression could prove to be an important therapeutic advance, given the debilitating sequelae of brain metastases and the especially poor prognosis associated with CNS progression.
However, the agent’s toxic effects are considerable. Grade 2-3 diarrhea, which developed in 65.4% of the neratinib arm of this study and 11.1% of the trastuzumab arm, should be considered clinically unacceptable.
Dr. Mark D. Pegram is at Stanford (Calif.) University’s Women’s Cancer Center. He reported serving as a consultant to Pfizer and Genentech/Roche, on the data and safety monitoring committee for a trial of neratinib monotherapy, and on the steering committee for a trial sponsored by Oncothyreon. Dr. Pegram made these remarks in an editorial accompanying Dr. Awada’s report (JAMA Oncol. 2016 April 14. doi: 10.1001/jamaoncol.2016.0238).
FROM JAMA ONCOLOGY
Key clinical point: Neratinib plus paclitaxel was not superior to trastuzumab plus paclitaxel for ERBB2-positive metastatic breast cancer.
Major finding: Median progression-free survival was 12.9 months for both neratinib-paclitaxel and trastuzumab-paclitaxel.
Data source: An international randomized controlled open-label phase II trial involving 479 patients followed for 2 years.
Disclosures: This trial was sponsored by Wyeth, Pfizer, and Puma Biotechnology, which were involved in the design and conduct of the study; the collection, analysis, and interpretation of the data; and the preparation of the report. Dr. Awada reported receiving an honorarium from Wyeth; his associates reported ties to numerous industry sources.
Pembrolizumab: 33% response rate in advanced melanoma
Immunotherapy with the PD-1 inhibitor pembrolizumab yielded a 33% overall response rate, a 35% progression-free survival rate at 1 year, and a median overall survival of 23 months in 655 patients with advanced melanoma, according to a report published online in JAMA.
The agent generally was well tolerated, with grade 3 or 4 treatment-related toxicities occurring in 14% of patients. Only 4% of patients discontinued pembrolizumab because of adverse events, including colitis, pyrexia, pneumonitis, and thyroid abnormalities, said Dr. Antoni Ribas of University of California, Los Angeles, and his associates.
The investigators analyzed data pooled from 8 open-label phase I cohorts, some of which have been reported previously, in this manufacturer-sponsored study. All the participants were adults (median age, 61 years) with advanced unresectable melanoma who resided in Australia, Canada, France, and the U.S. This pooled study population was heterogeneous, as the various cohorts had different eligibility criteria; some patients were treatment-naïve, some had received ipilimumab, and some were treated concomitantly with BRAF or MEK inhibitors.
The estimated median duration of response was 28.2 months, and 44% of patients who responded to pembrolizumab had a response duration of longer than 1 year. Approximately half of the patients were still alive at 2 years.
Treatment benefit was consistent regardless of whether or not patients had previously received ipilimumab and regardless of the dose or schedule of pembrolizumab they were given. Patients who were treatment-naïve showed the greatest response to pembrolizumab: an overall response rate of 45%, a 1-year progression-free survival rate of 52%, and a median overall survival of 31 months, the investigators said (JAMA. 2016 Apr 19. doi: 10.1001/jama.2016.4059). “Collectively, these data suggest that the majority of patients with melanoma treated with pembrolizumab will experience lasting objective responses,” they added.
The encouraging results of Ribas et al add to the burgeoning literature on the potential of PD-1 blockade to improve outcomes in metastatic melanoma. One finding in particular – the fact that half of the patients in these cohorts were alive at 2 years – is noteworthy because the median overall survival in most previous clinical trials of the disease was consistently shorter than 1 year.
However, the results also highlight the limitations of PD-1 blockade. Median progression-free survival was only 4 months, and the 1-year progression-free survival rate was only 35%. This suggests that the host immune system fails to adequately control the malignancy in two-thirds of patients within the first year. And even among treatment responders, disease progression occurred in 26%.
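The two-thirds figure can be checked directly from the reported 1-year progression-free survival rate; treating the rounded 35% as exact for illustration:

\[ 1 - 0.35 = 0.65 \approx \tfrac{2}{3} \]

That is, roughly two of every three patients had progressed or died within the first year of pembrolizumab therapy.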
Dr. Shailender Bhatia and Dr. John A. Thompson are at the University of Washington and the Fred Hutchinson Cancer Research Center, both in Seattle. Dr. Bhatia and Dr. Thompson made these remarks in an editorial accompanying Dr. Ribas’s report (JAMA. 2016 Apr 19;315:1573-4).
FROM JAMA
Key clinical point: Immunotherapy with pembrolizumab yielded a 33% overall response rate, a 35% 1-year progression-free survival rate, and a median overall survival of 23 months in 655 patients with advanced melanoma.
Major finding: The estimated median duration of response was 28.2 months, and half of the study population was still alive at 2 years.
Data source: A manufacturer-sponsored pooled analysis of data from 8 open-label phase I cohorts involving 655 patients followed for a median of 15 months.
Disclosures: This study was supported by Merck, maker of pembrolizumab, which also participated in the design and conduct of the study; the collection, analysis, and interpretation of the data; and preparation of the report. Dr. Ribas reported owning stock in several pharmaceutical companies. His associates reported ties to numerous industry sources.
Targeted corticosteroids cut GVHD incidence
Short-term low-dose corticosteroid prophylaxis reduces the incidence of graft-vs.-host disease in patients who undergo allogeneic haploidentical stem-cell transplantation to treat hematologic neoplasms, according to a report published online April 18 in the Journal of Clinical Oncology.
The key to selecting patients most likely to benefit from the corticosteroid therapy is to identify those at high risk for graft-vs.-host disease (GVHD) using two biomarkers: high levels of CD56bright natural killer cells in allogeneic grafts or high CD4:CD8 ratios in bone marrow grafts, according to Dr. Ying-Jun Chang of Peking University People’s Hospital, Beijing, and associates.
The investigators performed an open-label trial involving 228 patients aged 15-60 years treated at a single medical center during an 18-month period for acute myeloid leukemia, acute lymphoblastic leukemia, chronic myeloid leukemia, myelodysplastic syndrome, or other hematologic neoplasms. Using the two biomarkers, the patients were categorized as either high or low risk for developing GVHD. The high-risk patients were then randomly assigned either to short-term low-dose corticosteroids (72 patients) or to usual care (73 patients), and both groups were compared with the 83 low-risk patients, who received usual care.
The cumulative 100-day incidence of acute grade-II to grade-IV GVHD was significantly lower in the high-risk patients who received prophylaxis (21%) than in the high-risk patients who did not receive prophylaxis (48%). In fact, corticosteroids decreased the rate of GVHD so that it was comparable with that in the low-risk patients (26%), Dr. Chang and associates said (J Clin Oncol. 2016 Apr 18. doi: 10.1200/JCO.2015.63.8817).
Moreover, in the high-risk patients the median interval until GVHD developed was 25 days for those who took corticosteroids, compared with only 15 days for those who did not. Median times to myeloid recovery and platelet recovery were significantly shorter for high-risk patients who received corticosteroids than for either of the other study groups. However, 3-year overall survival and leukemia-free survival were comparable among the three study groups.
The short-term low-dose regimen of corticosteroids did not raise the rate of adverse events, including infection, which suggests that it is preferable to standard corticosteroid regimens in this patient population. The incidences of cytomegalovirus or Epstein-Barr virus reactivation, post-transplantation lymphoproliferative disorder, hemorrhagic cystitis, bacteremia, and invasive fungal infections were comparable among the three study groups. Of note, the incidences of osteonecrosis of the femoral head and secondary hypertension were significantly lower among high-risk patients who received corticosteroid prophylaxis than among those who did not.
“These results provide the first test, to our knowledge, of a novel risk-stratification-directed prophylaxis strategy that effectively prevented acute GVHD among patients who were at high risk for GVHD, without unnecessarily exposing patients who were at low risk to excessive toxicity from additional immunosuppressive agents,” Dr. Chang and associates said.
Despite the encouraging results of Chang et al, it would be premature to routinely use corticosteroid prophylaxis to prevent GVHD until further studies are completed.
This study wasn’t sufficiently powered to determine whether corticosteroids reduced treatment-specific mortality or improved overall survival. Future studies must examine these end points, as well as relapse rates, before this method of prophylaxis is widely adopted.
Dr. Edwin P. Alyea is at Dana-Farber Cancer Institute, Boston. He reported having no relevant financial disclosures. Dr. Alyea made these remarks in an editorial accompanying Dr. Chang’s report (J Clin Oncol. 2016 Apr 18. doi: 10.1200/JCO.2015.66.0902).
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: Short-term low-dose corticosteroid prophylaxis reduces the incidence of GVHD in patients who undergo haploidentical stem-cell transplantation to treat hematologic neoplasms.
Major finding: The 100-day incidence of acute GVHD was significantly lower in the high-risk patients who received corticosteroid prophylaxis (21%) than in the high-risk patients who did not (48%).
Data source: An open-label randomized controlled trial involving 228 Chinese patients who underwent stem-cell transplantation.
Disclosures: This study was supported by the Beijing Committee of Science and Technology, the National High Technology Research and Development Program of China, and the National Natural Science Foundation of China. Dr. Chang and associates reported having no relevant financial disclosures.
Donor EBV status affects recipient graft-vs-host disease risk
In allogeneic hematopoietic stem-cell transplantation, the donor’s status regarding Epstein-Barr virus affects the recipient’s risk of developing graft-vs-host disease – a “completely new and striking” finding, according to a report published online April 18 in the Journal of Clinical Oncology.
Approximately 80% of the general population has been infected with EBV and carries persistent virus in memory B cells. When viral material is transmitted to stem-cell recipients, it is known to cause posttransplantation lymphoproliferative disorder. Until now, however, no data were available to examine EBV serology’s effect on other posttransplantation outcomes, said Dr. Jan Styczynski of the department of pediatric hematology and oncology at Nicolaus Copernicus University, Bydgoszcz, Poland, and his associates.
They analyzed information in the European Society of Blood and Marrow Transplantation database for 11,364 patients with acute lymphoblastic leukemia or acute myeloblastic leukemia who underwent stem-cell transplantation between 1997 and 2012 and who were followed for approximately 5 years. Most of the donors (82%) were seropositive for EBV. Acute graft-vs-host disease (GVHD) developed in 32% and chronic GVHD developed in 40% of these stem cell–transplant recipients.
The incidence of chronic GVHD was significantly higher when the donor was EBV-seropositive (41%) than when the donor was EBV-seronegative (31%). Similarly, the incidence of acute GVHD was significantly higher when the donor was EBV-seropositive (32% vs 30%), but the magnitude of the difference between the two groups was smaller. The risk for GVHD increased even though patients receiving transplants from EBV-seropositive donors underwent more intensive GVHD prophylaxis than did those who had seronegative donors, the investigators said (J Clin Oncol. 2016 Apr 18. doi: 10.1200/JCO.2015.64.2405).
In contrast, the transplant recipients’ EBV status did not affect their risk of developing GVHD.
“Despite the effect of donor EBV serostatus on GVHD, we did not observe a corresponding GVHD-related death rate, and as a result, there was no effect on overall survival, relapse-free survival, relapse incidence, and nonrelapse mortality. However, it should be kept in mind that many other pre- and posttransplantation factors play a role in contributing to final transplantation outcomes,” Dr. Styczynski and his associates noted.
The current recommendation to monitor transplantation recipients for EBV and to give them “preemptive” rituximab to stave off the development of posttransplantation lymphoproliferative disorder might prove useful in also preventing GVHD, they added.
The findings of Dr. Styczynski and his associates raise the possibility that we may be able to prevent or treat GVHD in transplant recipients by controlling EBV infection.
Selecting only EBV-negative donors would be one way to accomplish this, but that would be impractical given the high seroprevalence of EBV in the general population. Depleting memory B cells, the reservoir of EBV infection, using monoclonal antibodies may prove helpful, and these agents might provide additional therapeutic effects. And novel antivirals such as retroviral integrase inhibitors may be more specific at targeting EBV than acyclovir and related agents, which have limited activity against latently infected B cells. These novel drugs, however, are not without risks and adverse effects.
A promising alternative might be to boost immunity to EBV using vaccination or adoptive transfer of ex vivo expanded EBV-specific cytotoxic T cells.
Dr. Katayoun Rezvani and Dr. Richard E. Champlin are with the University of Texas MD Anderson Cancer Center, Houston. Their financial disclosures are available at www.jco.org. They made these remarks in an editorial accompanying Dr. Styczynski’s report (J Clin Oncol. 2016 Apr 18. doi: 10.1200/JCO.2016.66.6099).
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: In allogeneic hematopoietic stem-cell transplantation, the donor’s EBV status affects the recipient’s risk of developing GVHD.
Major finding: Chronic GVHD was significantly more likely to develop when the donor was EBV-seropositive (41%) than EBV-seronegative (31%).
Data source: A retrospective analysis of data regarding 11,364 European patients with acute leukemia who underwent stem-cell transplantation and were followed for 5 years.
Disclosures: No study sponsor was identified. Dr. Styczynski reported having no relevant financial disclosures; his associates reported ties to numerous industry sources.
Benefit of lumbar fusion for spinal stenosis found to be small to nonexistent
The benefit of adding lumbar fusion surgery to decompression surgery for spinal stenosis was nonexistent in one large clinical trial and very modest in another, according to separate reports published online April 13 in the New England Journal of Medicine.
Both studies indicated that, given the considerable cost and the potential complications associated with lumbar fusion, it may not be worthwhile to add it to decompression surgery for spinal stenosis. “The goal of surgery in lumbar spinal stenosis is to improve walking distance and to relieve pain by decompression of the nerve roots. The addition of instrumented fusion – ‘just to be sure’ – for the treatment of the most frequent forms of lumbar spinal stenosis does not create any added value for patients and might be regarded as an overcautious and unnecessary treatment,” Dr. Wilco C. Peul and Dr. Wouter A. Moojen said in an editorial accompanying the two reports.
Surgical decompression of spinal stenosis using laminectomy is increasingly being supplemented with lumbar fusion, which is thought to firm up spinal instability and minimize the risk of future deformity. In the United States, approximately half of patients who have surgery for spinal stenosis undergo fusion procedures. Of those who also show degenerative spondylolisthesis on preoperative imaging studies, 96% undergo fusion procedures because many spine surgeons see this as a sign of instability and a mandatory indication for fusion. However, the evidence supporting the use of fusion plus decompression, as opposed to decompression alone, is weak, according to the investigators who conducted the Swedish Spinal Stenosis Study. The other study in the New England Journal of Medicine, the Spinal Laminectomy Versus Instrumented Pedicle Screw (SLIP) trial, was conducted in the United States.
Both of those clinical trials were performed to shed further light on the issue.
In the Swedish Spinal Stenosis Study, the investigators assessed outcomes in 247 patients aged 50-80 years who were treated at seven Swedish hospitals over the course of 6 years. This open-label, superiority trial randomly assigned 124 patients to decompression surgery alone and 123 to decompression plus fusion. The primary outcome measure was score on the Oswestry Disability Index (ODI), which measures disability and quality of life in patients with low-back pain, 2 years after surgery. The ODI scale runs from 0 to 100, with higher scores indicating more severe disability, said Dr. Peter Försth of the department of surgical sciences at Uppsala (Sweden) University and the Stockholm Spine Center and his associates.
At 2 years, there was no significant difference between the two study groups; the decompression-only group had a mean ODI score of 24, and the fusion group had a mean score of 27. The ODI scores in both groups had improved from baseline to a similar degree: by 17 points with decompression alone and by 15 points with fusion. In addition, fusion surgery was not superior to decompression alone regardless of whether patients had any degree of spondylolisthesis and regardless of whether they had severe spondylolisthesis involving a vertebral slip of 7.4 mm or more, the investigators reported (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMoa1513721).
The two study groups also showed no significant differences in secondary outcome measures, including performance on the 6-minute walk test and subjective patient assessment of improvement in walking ability. Moreover, these results persisted in the 144 patients who were assessed at 5-year follow-up.
In contrast, decompression alone was associated with fewer complications than decompression plus fusion. Postoperative wound infection developed in only 4% of the decompression-only group, compared with 10% of the fusion group. Although this study wasn’t adequately powered to draw firm conclusions regarding complications, a previous analysis of registry data reported that adding fusion surgery to decompression surgery doubles the risk of severe adverse events in older patients, Dr. Försth and his associates said.
Decompression alone also was markedly less expensive than fusion surgery. Mean direct costs were $6,800 higher for fusion than for decompression alone, because of the additional operating time needed, the extended hospital stay, and the cost of the implant.
In the SLIP trial, the researchers compared outcomes in 66 patients aged 50-80 years who all had spinal stenosis with grade 1 degenerative spondylolisthesis. The participants were randomly assigned to undergo decompression alone (35 patients) or decompression plus fusion (31 patients) at five U.S. medical centers, said Dr. Zoher Ghogawala of the Alan and Jacqueline B. Stuart Spine Research Center in the department of neurosurgery at Lahey Hospital and Medical Center, Burlington, Mass., and his associates.
The primary outcome measure was the physical-component summary score on the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) 2 years after surgery. This scale also runs from 0 to 100, but higher scores indicate better physical health. Five points was prespecified as the minimal clinically important difference on the SF-36.
At 2 years, patients in the fusion group showed a small but significant advantage of 5.7 points on the SF-36, with a mean improvement of 15.2 points from baseline, compared with a mean improvement of 9.5 points in the decompression-only group. However, the ODI scores, a secondary outcome measure in this study, were not significantly different between the two study groups, Dr. Ghogawala and his associates reported (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMoa1508788). Surgical complications, blood loss, and length of stay all were significantly greater with fusion than with decompression alone.
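As a quick check against the prespecified threshold, the between-group difference in SF-36 improvement (using the rounded values reported above) just clears the 5-point minimal clinically important difference:

\[ \Delta = 15.2 - 9.5 = 5.7 > 5 \]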
Dr. Försth’s study was supported by Uppsala University, Uppsala County Council, the Stockholm Spine Center, and Johnson & Johnson. Two of his associates reported ties to Medtronic and Quantify Research. Dr. Ghogawala’s study was supported by the Jean and David Wallace Foundation and the Greenwich Lumbar Stenosis SLIP Study Fund. His associates reported ties to numerous industry sources.
Both of these studies clearly demonstrated that for most patients, stenosis surgery should be limited to decompression when no overt instability is present. Dr. Ghogawala and his colleagues correctly concluded that the modest difference in SF-36 score in favor of fusion doesn’t justify that procedure’s higher cost and complication rate.
Fusion surgery is no longer best practice and should be restricted to patients who have proven spinal instability; vertebral destruction caused by trauma, tumors, infections, or spinal deformities; or possibly neuroforamen stenosis with compressed exiting nerves due to postsurgical disk collapse.
Dr. Wilco C. Peul is at Leiden (the Netherlands) University Medical Center and at Medical Center Haaglanden, the Hague. Dr. Wouter A. Moojen is at Medical Center Haaglanden. Dr. Peul reported receiving grants from ZonMW, Paradigm Spine, Medtronic, Eurospine Foundation, and CVZ. Dr. Moojen reported having no relevant financial disclosures. Dr. Peul and Dr. Moojen made these remarks in an editorial accompanying the reports on the Swedish Spinal Stenosis Study and the Spinal Laminectomy Versus Instrumented Pedicle Screw trial (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMe1600955).
Both of these studies clearly demonstrated that for most patients, stenosis surgery should be limited to decompression when no overt instability is present. Dr. Ghogawala and his colleagues correctly concluded that the modest difference in SF-36 score in favor of fusion doesn’t justify that procedure’s higher cost and complication rate.
Fusion surgery is no longer best practice and should be restricted to patients who have proven spinal instability; vertebral destruction caused by trauma, tumors, infections, or spinal deformities; or possibly neuroforamen stenosis with compressed exiting nerves due to postsurgical disk collapse.
Dr. Wilco C. Peul is at Leiden (the Netherlands) University Medical Center and at Medical Center Haaglanden, the Hague. Dr. Wouter A. Moojen is at Medical Center Haaglanden. Dr. Peul reported receiving grants from ZonMW, Paradigm Spine, Medtronic, Eurospine Foundation, and CVZ. Dr. Moojen reported having no relevant financial disclosures. Dr. Peul and Dr. Moojen made these remarks in an editorial accompanying the reports on the Swedish Spinal Stenosis Study and the Spinal Laminectomy Versus Instrumented Pedicle Screw trial (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMe1600955).
Both of these studies clearly demonstrated that for most patients, stenosis surgery should be limited to decompression when no overt instability is present. Dr. Ghogawala and his colleagues correctly concluded that the modest difference in SF-36 score in favor of fusion doesn’t justify that procedure’s higher cost and complication rate.
Fusion surgery is no longer best practice and should be restricted to patients who have proven spinal instability; vertebral destruction caused by trauma, tumors, infections, or spinal deformities; or possibly neuroforamen stenosis with compressed exiting nerves due to postsurgical disk collapse.
Dr. Wilco C. Peul is at Leiden (the Netherlands) University Medical Center and at Medical Center Haaglanden, the Hague. Dr. Wouter A. Moojen is at Medical Center Haaglanden. Dr. Peul reported receiving grants from ZonMW, Paradigm Spine, Medtronic, Eurospine Foundation, and CVZ. Dr. Moojen reported having no relevant financial disclosures. Dr. Peul and Dr. Moojen made these remarks in an editorial accompanying the reports on the Swedish Spinal Stenosis Study and the Spinal Laminectomy Versus Instrumented Pedicle Screw trial (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMe1600955).
The benefit of adding lumbar fusion surgery to decompression surgery for spinal stenosis was nonexistent in one large clinical trial and very modest in another, according to separate reports published online April 13 in the New England Journal of Medicine.
Both studies indicated that, given the considerable cost and the potential complications associated with lumbar fusion, it may not be worthwhile to add it to decompression surgery for spinal stenosis. “The goal of surgery in lumbar spinal stenosis is to improve walking distance and to relieve pain by decompression of the nerve roots. The addition of instrumented fusion – ‘just to be sure’ – for the treatment of the most frequent forms of lumbar spinal stenosis does not create any added value for patients and might be regarded as an overcautious and unnecessary treatment,” Dr. Wilco C. Peul and Dr. Wouter A. Moojen said in an editorial accompanying the two reports.
Surgical decompression of spinal stenosis using laminectomy is increasingly being supplemented with lumbar fusion, which is thought to firm up spinal instability and minimize the risk of future deformity. In the United States, approximately half of patients who have surgery for spinal stenosis undergo fusion procedures. Of those who also show degenerative spondylolisthesis on preoperative imaging studies, 96% undergo fusion procedures because many spine surgeons see this as a sign of instability and a mandatory indication for fusion. However, the evidence supporting the use of fusion plus decompression, as opposed to decompression alone, is weak, according to the investigators who conducted the Swedish Spinal Stenosis Study. The other study in the New England Journal of Medicine, the Spinal Laminectomy Versus Instrumented Pedicle Screw (SLIP) trial, was conducted in the United States.
Both of those clinical trials were performed to shed further light on the issue.
In the Swedish Spinal Stenosis Study, the investigators assessed outcomes in 247 patients aged 50-80 years who were treated at seven Swedish hospitals over the course of 6 years. This open-label, superiority trial randomly assigned 124 patients to decompression surgery alone and 123 to decompression plus fusion. The primary outcome measure was score on the Oswestry Disability Index (ODI), which measures disability and quality of life in patients with low-back pain, 2 years after surgery. The ODI scale runs from 0 to 100, with higher scores indicating more severe disability, said Dr. Peter Försth of the department of surgical sciences at Uppsala (Sweden) University and the Stockholm Spine Center and his associates.
At 2 years, there was no significant difference between the two study groups; the decompression-only group had a mean ODI score of 24, and the fusion group had a mean score of 27. The ODI scores in both groups had improved from baseline to a similar degree: by 17 points with decompression alone and by 15 points with fusion. In addition, fusion surgery was not superior to decompression alone regardless of whether patients had any degree of spondylolisthesis and regardless of whether they had severe spondylolisthesis involving a vertebral slip of 7.4 mm or more, the investigators reported (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMoa1513721). The two study groups also showed no significant differences in secondary outcome measures, including performance on the 6-minute walk test and subjective patient assessment of improvement in walking ability. Moreover, these results persisted in the 144 patients who were assessed at 5-year follow-up.
In contrast, decompression alone was associated with fewer complications than decompression plus fusion. Postoperative wound infection developed in only 4% of the decompression-only group, compared with 10% of the fusion group. Although this study wasn’t adequately powered to draw firm conclusions regarding complications, a previous analysis of registry data reported that adding fusion surgery to decompression surgery doubles the risk of severe adverse events in older patients, Dr. Försth and his associates said.
Decompression alone also was markedly less expensive than fusion surgery. Mean direct costs were $6,800 higher for fusion than for decompression alone, because of the additional operating time needed, the extended hospital stay, and the cost of the implant.
In the SLIP trial, the researchers compared outcomes in 66 patients aged 50-80 years who all had spinal stenosis with grade 1 degenerative spondylolisthesis. The participants were randomly assigned to undergo decompression alone (35 patients) or decompression plus fusion (31 patients) at five U.S. medical centers, said Dr. Zoher Ghogawala of the Alan and Jacqueline B. Stuart Spine Research Center in the department of neurosurgery at Lahey Hospital and Medical Center, Burlington, Mass., and his associates.
The primary outcome measure was the physical-component summary score on the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) 2 years after surgery. This scale also runs from 0 to 100, but higher scores indicate better physical health. Five points was prespecified as the minimal clinically important difference on the SF-36.
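As an aside, a prespecified MCID works as a simple threshold test on the between-group difference. The brief sketch below, with made-up function and variable names, shows that arithmetic using the mean improvements reported in the next paragraph; it is an illustration, not the trial's analysis.

```python
# A minimal illustration (not from the trial) of applying a prespecified
# minimal clinically important difference (MCID): a between-group difference
# counts as clinically meaningful only if it reaches the MCID.
SF36_MCID = 5.0  # points, as prespecified in the SLIP trial

def clinically_important(group_a_change, group_b_change, mcid=SF36_MCID):
    """Return True if the between-group difference meets the MCID."""
    return abs(group_a_change - group_b_change) >= mcid

# Mean SF-36 physical-component improvements reported at 2 years:
print(clinically_important(15.2, 9.5))  # True: 5.7 points exceeds the 5-point MCID
```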
At 2 years, patients in the fusion group showed a small but significant advantage of 5.7 points on the SF-36, improving by a mean of 15.2 points from baseline, compared with a mean improvement of 9.5 points in the decompression-only group. However, the ODI scores, a secondary outcome measure in this study, were not significantly different between the two study groups, Dr. Ghogawala and his associates reported (N Engl J Med. 2016 April 13. doi: 10.1056/NEJMoa1508788). Surgical complications, blood loss, and length of stay all were significantly greater with fusion than with decompression alone.
Dr. Försth’s study was supported by Uppsala University, Uppsala County Council, the Stockholm Spine Center, and Johnson & Johnson. Two of his associates reported ties to Medtronic and Quantify Research. Dr. Ghogawala’s study was supported by the Jean and David Wallace Foundation and the Greenwich Lumbar Stenosis SLIP Study Fund. His associates reported ties to numerous industry sources.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Key clinical point: The benefit of adding lumbar fusion surgery to decompression surgery for spinal stenosis was nonexistent in one large trial and very modest in another.
Major finding: At 2 years in the Swedish Spinal Stenosis Study, there was no significant difference between the two study groups; the decompression-only group had a mean Oswestry Disability Index score of 24, and the fusion group had a mean score of 27.
Data source: Two multicenter, randomized trials involving 247 patients and 66 patients, comparing decompression surgery alone with decompression plus fusion.
Disclosures: Dr. Försth’s study was supported by Uppsala University, Uppsala County Council, the Stockholm Spine Center, and Johnson & Johnson. Two of his associates reported ties to Medtronic and Quantify Research. Dr. Ghogawala’s study was supported by the Jean and David Wallace Foundation and the Greenwich Lumbar Stenosis SLIP Study Fund. His associates reported ties to numerous industry sources.
ED bedside flu test accurate across flu seasons
A rapid bedside diagnostic test for influenza showed consistent sensitivity and specificity across four consecutive flu seasons in a single pediatric ED in France, according to a report in Diagnostic Microbiology and Infectious Disease.
During flu seasons, it is difficult to distinguish young children who have the flu from those who have serious bacterial infections because clinical symptoms alone cannot differentiate the two conditions and fever may be the only symptom during the onset of a bacterial infection. Rapid influenza diagnostic tests purport to help ED clinicians estimate the probability of influenza at the bedside, which in turn can reduce the need for further diagnostic testing, length of ED stay, inappropriate use of antibiotics, and the costs of care, said Dr. E. Avril of the pediatric ED, University Hospital in Nantes, France, and associates.
To assess the diagnostic value of one rapid influenza diagnostic test used in this setting every winter, the investigators studied 764 patients younger than 5 years of age who were admitted to the ED with fever of no known origin during four consecutive flu seasons. The prevalence of influenza varied widely during the study period, from a low of 30% to a high of 62%.
The rapid diagnostic test performed comparably well across the four flu seasons, with only a modest decrease in sensitivity and specificity during the 2010 H1N1 flu pandemic. The bedside test had an overall sensitivity of 0.82, a specificity of 0.98, a positive likelihood ratio of 37.8, and a negative likelihood ratio of 0.19. These results are similar to those of two previous small-scale studies that found sensitivities of 69%-85% and specificities of 83%-98%, Dr. Avril and associates said (Diagn Microbiol Infect Dis. 2016. doi: 10.1016/j.diagmicrobio.2016.03.015).
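As background on these metrics, the sketch below shows the standard definitions — a positive likelihood ratio of sensitivity/(1 − specificity) and a negative likelihood ratio of (1 − sensitivity)/specificity — and how a likelihood ratio converts a pretest probability (here, seasonal prevalence) into a post-test probability. It is an illustration with invented function names, not the investigators' analysis; the published ratios were computed from raw counts, so recomputing them from the rounded sensitivity and specificity gives slightly different values.

```python
# A minimal sketch of standard diagnostic-test arithmetic; not the study's code.

def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity

def post_test_probability(pretest_prob, lr):
    """Apply a likelihood ratio to a pretest probability via odds."""
    odds = pretest_prob / (1.0 - pretest_prob) * lr
    return odds / (1.0 + odds)

lr_pos, lr_neg = likelihood_ratios(0.82, 0.98)  # ~41 and ~0.18 from rounded inputs
for prevalence in (0.30, 0.62):  # range of seasonal flu prevalence in the study
    print(f"prevalence {prevalence:.0%}: "
          f"P(flu | positive) = {post_test_probability(prevalence, lr_pos):.2f}, "
          f"P(flu | negative) = {post_test_probability(prevalence, lr_neg):.2f}")
```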
These findings “support the rational use of rapid influenza diagnostic tests in clinical practice for young children presenting with fever without a source during flu season,” the investigators said.
Dr. Avril and associates added that they assessed only one rapid diagnostic test for influenza (QuickVue) – the only one available in their ED because of cost – but that there are 22 such tests commercially available. Nantes University Hospital supported the study. Dr. Avril and associates reported having no relevant disclosures.
FROM DIAGNOSTIC MICROBIOLOGY AND INFECTIOUS DISEASE
Key clinical point: A rapid bedside diagnostic test for influenza was accurate across four consecutive flu seasons in a pediatric ED.
Major finding: The bedside test had an overall sensitivity of 0.82, a specificity of 0.98, a positive likelihood ratio of 37.8, and a negative likelihood ratio of 0.19.
Data source: A prospective analysis of the sensitivity and specificity of one rapid bedside diagnostic test in 764 children seen over a 4-year period.
Disclosures: Nantes University Hospital supported the study. Dr. Avril and associates reported having no relevant disclosures.
Pertussis vaccines effective against pertactin-deficient strains
Current pertussis vaccines were as effective against the rapidly evolving pertactin-deficient strains of the organism as they have been against other strains, according to a report published online April 13 in Pediatrics.
The proportion of pertussis strains lacking pertactin increased markedly in the United States, from 14% in 2010 to 85% in 2012. Pertactin, an autotransporter thought to be “involved in bacterial adhesion to the respiratory tract and resistance to neutrophil-induced bacterial clearance,” is a component of acellular pertussis vaccines. Some have speculated that pertactin deficiency evolved to give the bacteria an advantage in response to vaccine-related selection pressure, and that this evolution has contributed to the recent resurgence of pertussis disease, said Lucy Breakwell, Ph.D., of the epidemic intelligence service, Centers for Disease Control and Prevention, Atlanta, and her associates.
To assess vaccine efficacy in the setting of pertactin deficiency, the investigators studied 820 cases and 2,369 matched control subjects treated in Vermont during a 3-year period encompassing a recent pertussis outbreak there. The study included children aged 4-10 years given the five-dose DTaP childhood series and adolescents aged 11-19 years given the adolescent Tdap dose. Specimens from these cases had been cultured routinely by the state department of health laboratory, and more than 90% of the available isolates were found to be pertactin deficient.
The overall vaccine efficacy of the DTaP series was 84%, and of the Tdap booster, 70%. “Remarkably,” these rates are comparable to the 89% efficacy of DTaP reported in a 2010 California outbreak and the 64% efficacy of Tdap reported in a 2012 Washington state outbreak, the investigators said. “Our findings suggest that both acellular pertussis vaccines remain protective against pertussis disease in the setting of high pertactin deficiency,” and therefore remain the best method for protecting against severe disease, Dr. Breakwell and her associates said (Pediatrics. 2016 April 12. doi: 10.1542/peds.2015-3973).
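In a case-control design like this one, vaccine efficacy is conventionally estimated as (1 − odds ratio) × 100, where the odds ratio compares the odds of vaccination among cases with the odds among controls. The sketch below illustrates that arithmetic with entirely hypothetical counts; it is not the investigators' analysis.

```python
# A minimal sketch of case-control vaccine-efficacy arithmetic, using
# made-up counts purely for illustration (not the study's data).

def vaccine_efficacy(cases_vacc, cases_unvacc, ctrl_vacc, ctrl_unvacc):
    """Estimate VE (%) as 1 minus the vaccination odds ratio."""
    odds_ratio = (cases_vacc / cases_unvacc) / (ctrl_vacc / ctrl_unvacc)
    return (1.0 - odds_ratio) * 100.0

# Hypothetical 2x2 table: 600 vaccinated / 220 unvaccinated cases,
# 2,200 vaccinated / 169 unvaccinated controls.
print(round(vaccine_efficacy(600, 220, 2200, 169), 1))  # VE of roughly 79%
```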
Nevertheless, further study is warranted “to better understand the implications of pertactin deficiency on pertussis pathogenesis and host immunologic response, which could provide insight into the development of novel pertussis vaccines,” they wrote.
This study was supported by the Centers for Disease Control and Prevention. Dr. Breakwell and her associates reported having no relevant financial disclosures.
Immunity to pertussis, whether from natural infection or vaccination, is not lifelong. Two acellular pertussis vaccines are in widespread use in the United States; they differ in the number and amounts of purified proteins they contain and in how those proteins are chemically inactivated. A vaccine made by GlaxoSmithKline includes three proteins: pertussis toxin (PT), filamentous hemagglutinin (FHA), and pertactin (PRN). A vaccine made by Sanofi Pasteur includes the same three proteins (PT, FHA, and PRN) plus two types of fimbriae (FIM), for a total of five ingredients. PT causes virtually all the symptoms of pertussis disease. The other proteins in the two acellular vaccines are there principally to prevent the Bordetella pertussis bacteria from attaching to the nasopharynx and lung, because preventing attachment prevents pathogenesis.
Although both acellular pertussis vaccines provide good protection, many reports indicate that the immunity they confer wanes over time. One hypothesis among experts is that this waning immunity might be related to changes in the protein structure of one or more of the targets of the acellular vaccines. The protein that has been shown to have changed since the introduction of acellular vaccines is PRN.
In the current study from Vermont, we learn that acellular pertussis vaccine efficacy remained high in that state even though more than 90% of the B. pertussis strains lacked the PRN protein on the bacterial surface that would otherwise serve as an antibody target after vaccination. In other words, the loss of PRN as a vaccine target neither reduced vaccine efficacy nor worsened the waning of immunity that follows vaccination.
The result is reassuring and expected. All bacteria that seek to attach themselves to our respiratory tract, in the nose or lungs or both, have many different proteins to accomplish that attachment task. The redundancy of those proteins fits easily in a biologic necessity framework, because pathogenesis cannot begin for any of the respiratory pathogenic bacteria unless they can attach themselves to the host. FHA and PRN were included in the GlaxoSmithKline vaccine, and FHA plus two types of FIM antigen in the Sanofi Pasteur vaccine, to raise antibody to multiple “adhesion” proteins. That way, if “escape mutants” occurred, as we are now observing for PRN-deficient strains, the vaccines would still work. The study from Vermont tells us that they still do work.
Michael E. Pichichero, M.D., a specialist in pediatric infectious diseases, is director of the Research Institute, Rochester (N.Y.) General Hospital. He is also a pediatrician at Legacy Pediatrics in Rochester. Dr. Pichichero commented in an interview. He said he had no relevant financial disclosures.
FROM PEDIATRICS
Key clinical point: Current pertussis vaccines remain effective against rapidly evolving pertactin-deficient strains of the organism.
Major finding: The overall vaccine efficacy of the DTaP series was 84%, and of the Tdap booster, 70%.
Data source: A case-control study assessing vaccine efficacy in 820 patients and 2,369 controls involved in the recent Vermont outbreak of pertussis.
Disclosures: This study was supported by the Centers for Disease Control and Prevention. Dr. Breakwell and her associates reported having no relevant financial disclosures.
USPSTF updates guideline for preventive aspirin therapy
Many patients aged 50-59 years should start low-dose aspirin for the primary prevention of cardiovascular disease and colorectal cancer, according to the U.S. Preventive Services Task Force’s updated clinical practice guideline on aspirin therapy, published online April 11 in Annals of Internal Medicine.
The evidence is clear that the benefits outweigh the potential harms of low-dose aspirin in this age group if patients have a 10% or greater 10-year cardiovascular disease (CVD) risk, are not at increased risk of bleeding, have a life expectancy of at least 10 years, and are willing to take the treatment for at least 10 years, said Dr. Albert L. Siu and his associates in the USPSTF.
The organization based this guideline on two systematic reviews of the literature, updating its 2009 review on aspirin therapy to prevent cardiovascular disease and its 2007 review on aspirin therapy to prevent colorectal cancer. The findings from these reviews of the current evidence were used to develop a decision-analysis model to weigh the benefits and harms of treatment in various patient groups defined by age, gender, and risk factors.
Recent studies of primary prevention of CVD, which included 118,445 participants, “consistently demonstrated effectiveness of aspirin in preventing nonfatal MI and stroke.” Pooled analyses showed that low-dose aspirin reduced nonfatal MI and coronary events by 17% (risk ratio, 0.83) and that aspirin at any dose reduced them by 22%. Low-dose aspirin also reduced all-cause mortality risk (RR, 0.95) in pooled analyses.
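To make those relative risks concrete, here is a back-of-the-envelope sketch, not part of the task force's decision model, converting a risk ratio into a relative risk reduction and, under an assumed 10% baseline 10-year CVD risk (borrowed from the guideline's eligibility threshold), into a rough number needed to treat. The function names and the baseline-risk figure are this sketch's own assumptions.

```python
# A back-of-the-envelope sketch, not the task force's model. The 10% baseline
# risk below is an assumption taken from the guideline's eligibility threshold.

def relative_risk_reduction(rr):
    """E.g., a risk ratio of 0.83 corresponds to a 17% relative reduction."""
    return (1.0 - rr) * 100.0

def number_needed_to_treat(baseline_risk, rr):
    """NNT = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * (1.0 - rr)
    return 1.0 / absolute_risk_reduction

print(round(relative_risk_reduction(0.83)))       # 17 (% reduction in nonfatal MI)
print(round(number_needed_to_treat(0.10, 0.83)))  # ~59 treated per event avoided
```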
Aspirin therapy also reduced the risk of colorectal cancer, but this benefit didn’t appear until after 5-10 years of treatment. Three trials reported a 40% reduction (RR, 0.60) after 10-20 years of daily low-dose aspirin.
On the other side of the equation, major GI bleeding increased by 65% among aspirin users when the data from 15 CVD prevention trials were pooled. Similarly, pooled analyses showed a 33% increase in hemorrhagic stroke among aspirin users, compared with nonusers.
The benefits of low-dose aspirin were highest and the harms were lowest in patients aged 50-59 years, hence the first recommendation in the new guideline. In patients aged 60-69 years, the benefit-to-harm balance isn’t as clear-cut, so the decision to initiate or continue aspirin therapy in this age group must be made on an individual basis. “Some adults [at this age] may decide that avoiding an MI or stroke is very important and that having a GI bleeding event is not as significant. They may decide to take aspirin at a lower CVD risk level than those who are more concerned about GI bleeding. Adults who have a high likelihood of benefit with little potential for harm should be encouraged to consider aspirin use.
“Conversely, adults who have little potential of benefit or high risk for GI bleeding should be discouraged” from taking aspirin therapy, the investigators said (Ann Intern Med. 2016 Apr 11. doi: 10.7326/M16-0577).
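For readers who want the eligibility conditions in one place, the following hypothetical sketch encodes the guideline's stated criteria as explicit branching. The function name, inputs, and return strings are inventions of this illustration; it is a reading aid, not a clinical decision tool.

```python
# A hypothetical encoding of the guideline's stated criteria, written purely
# as a reading aid; names and messages are this sketch's own.

def aspirin_guidance(age, ten_year_cvd_risk, increased_bleeding_risk,
                     life_expectancy_years, willing_ten_years):
    eligible = (ten_year_cvd_risk >= 0.10
                and not increased_bleeding_risk
                and life_expectancy_years >= 10
                and willing_ten_years)
    if 50 <= age <= 59 and eligible:
        return "initiate low-dose aspirin"
    if 60 <= age <= 69 and eligible:
        return "individual decision, weighing CVD benefit against GI bleeding risk"
    return "evidence insufficient or criteria not met"

print(aspirin_guidance(55, 0.12, False, 20, True))  # "initiate low-dose aspirin"
```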
The task force found that current evidence is insufficient to assess the balance of benefits and harms regarding aspirin therapy for adults younger than age 50 or older than age 70. In the latter group in particular, the picture is complicated by the effects of age, use of other medications, and concomitant illness. However, since cardiovascular risks are increased after age 70 and the incidences of MI and stroke are relatively high, the benefits of preventive aspirin could be substantial in this age group, said Dr. Siu of the Icahn School of Medicine at Mount Sinai, New York, and the Veterans Affairs Medical Center, the Bronx, and his associates.
The USPSTF guideline generally accords with existing recommendations from the American Heart Association, the American Stroke Association, the American Diabetes Association, the American Academy of Family Physicians, and the American College of Chest Physicians. At present, the American Cancer Society doesn’t have recommendations for or against aspirin therapy; the American Gastroenterological Association and the National Comprehensive Cancer Network “limit their recommendations to patients who are at increased risk for colorectal cancer,” Dr. Siu and his associates added.
Copies of the guideline and of the supporting literature reviews and the decision-analysis tool are available at www.uspreventiveservicestaskforce.org.
FROM ANNALS OF INTERNAL MEDICINE
Patients and providers must read beyond the headlines advocating expanded aspirin use. The USPSTF explicitly endorses low-dose aspirin only for those who have a 10-year cardiovascular disease (CVD) risk of 10% or greater, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take the medication for at least that long. Balancing these competing risks is no small task. While Internet-based CVD risk calculators are readily available, they are not routinely used. GI risk is less easily quantified; a history of ulcer disease or GI bleeding is the most important consideration, but concomitant medications (NSAIDs, anticoagulants, and SSRIs) must also be recognized. Proton pump inhibitors can reduce aspirin-induced upper GI bleeding and are cost effective in a primary prevention population at increased GI risk (Arch Intern Med. 2011;171:218-25). There is, however, no intervention to reduce bleeding events in the small bowel and colon.
The USPSTF acknowledged that the benefits of aspirin in reducing colon cancer incidence and mortality were not established in the primary CVD prevention population, and that its effect in other populations took 10 years or more of treatment to emerge, which led the task force to target adults aged 50-70 years. Aspirin therapy should not be considered a substitute for colonoscopy, and among those undergoing screening its incremental value remains uncertain (Ann Intern Med. 2001;135:769-81). Gastroenterologists must play an active role in ensuring the appropriate use of aspirin therapy and thereby contribute to improved overall patient outcomes.
Dr. James M. Scheiman is professor of internal medicine at the University of Michigan Health System, Ann Arbor. He is a consultant to Aralez, Pfizer, Stryker, Intec, and Teva.
FROM ANNALS OF INTERNAL MEDICINE
Key clinical point: The USPSTF recommends that many adults aged 50-59 years start low-dose aspirin for primary prevention of cardiovascular disease and colorectal cancer.
Major finding: Low-dose aspirin reduced nonfatal MI and coronary events by 17% (RR, 0.83), and 10-20 years of daily low-dose aspirin reduced the risk of colorectal cancer by 40% (RR, 0.60).
Data source: Three systematic reviews of the literature and a compilation of clinical practice guidelines for preventive aspirin therapy.
Disclosures: The USPSTF is an independent, voluntary group funded by the Agency for Healthcare Research and Quality by mandate of the U.S. Congress. Dr. Siu and his associates reported having no relevant financial disclosures.