Does Oral Chemotherapy Venetoclax Combined with Rituximab Improve Survival in Patients with Relapsed or Refractory Chronic Lymphocytic Leukemia?
Study Overview
Objective. To assess whether a combination of venetoclax with rituximab, compared to standard chemoimmunotherapy (bendamustine with rituximab), improves outcomes in patients with relapsed or refractory chronic lymphocytic leukemia.
Design. International, randomized, open-label, phase 3 clinical trial (MURANO).
Setting and participants. Patients were eligible for the study if they were 18 years of age or older, had a diagnosis of relapsed or refractory chronic lymphocytic leukemia that required therapy, had received 1 to 3 previous treatments (including at least 1 chemotherapy-containing regimen), had an Eastern Cooperative Oncology Group performance status score of 0 or 1, and had adequate bone marrow, renal, and hepatic function. Patients were randomly assigned to receive either venetoclax plus rituximab or bendamustine plus rituximab. Randomization was stratified by geographic region, responsiveness to previous therapy, and the presence or absence of chromosome 17p deletion.
Main outcome measures. The primary outcome was investigator-assessed progression-free survival, defined as the time from randomization to the first occurrence of disease progression, relapse, or death from any cause, whichever occurred first. Secondary efficacy endpoints included independent review committee–assessed progression-free survival (stratified by chromosome 17p deletion), independent review committee–assessed overall response rate and complete response rate, overall survival, rates of clearance of minimal residual disease, duration of response, event-free survival, and time to the next treatment for chronic lymphocytic leukemia.
Main results. From 31 March 2014 to 23 September 2015, a total of 389 patients were enrolled at 109 sites in 20 countries and randomly assigned to receive venetoclax plus rituximab (n = 194) or bendamustine plus rituximab (n = 195). Median age was 65 years (range, 22–85), and the majority of patients (73.8%) were men. Overall, the demographic and disease characteristics of the 2 groups were similar at baseline.
The median follow-up period was 23.8 months (range, 0–37.4). Investigator-assessed progression-free survival was significantly longer in the venetoclax-rituximab group (median not reached; 32 events of progression or death among 194 patients) than in the bendamustine-rituximab group (median, 17 months; 114 events among 195 patients). The 2-year rate of investigator-assessed progression-free survival was 84.9% (95% confidence interval [CI], 79.1–90.5) in the venetoclax-rituximab group and 36.3% (95% CI, 28.5–44.0) in the bendamustine-rituximab group (hazard ratio for progression or death, 0.17; 95% CI, 0.11–0.25; P < 0.001). The benefit was consistently in favor of the venetoclax-rituximab group in all prespecified subgroup analyses, with or without chromosome 17p deletion.
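As a rough internal-consistency check (not part of the trial's analysis), the reported control-arm 2-year progression-free survival and the reported hazard ratio are enough to approximate the venetoclax-arm figure, because under the proportional-hazards assumption the two survival curves are related by S_treatment(t) = S_control(t) raised to the power of the hazard ratio. The minimal Python sketch below uses only the numbers quoted above.

```python
# Back-of-the-envelope check (not the trial's analysis): under proportional
# hazards, the treatment-arm survival function satisfies S_t(t) = S_c(t) ** HR.
# Inputs are the reported 2-year PFS in the bendamustine-rituximab arm and the
# reported hazard ratio; the output approximates the reported 84.9%.

control_pfs_2yr = 0.363   # bendamustine-rituximab 2-year PFS, as reported
hazard_ratio = 0.17       # hazard ratio for progression or death, as reported

implied_venetoclax_pfs_2yr = control_pfs_2yr ** hazard_ratio
print(f"Implied 2-year PFS with venetoclax-rituximab: {implied_venetoclax_pfs_2yr:.1%}")
# prints roughly 84%, consistent with the reported 84.9%
```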
The rate of overall survival was higher in the venetoclax-rituximab group than in the bendamustine-rituximab group, with 24-month rates of 91.9% and 86.6%, respectively (hazard ratio, 0.48; 95% CI, 0.25–0.90). Assessments of minimal residual disease were available for 366 of the 389 patients (94.1%). On the basis of peripheral-blood samples, the rate of clearance of minimal residual disease was higher in the venetoclax-rituximab group than in the bendamustine-rituximab group (121 of 194 patients [62.4%] vs 26 of 195 patients [13.3%]). In bone marrow aspirate, higher rates of clearance of minimal residual disease were likewise seen in the venetoclax-rituximab group (53 of 194 patients [27.3%]) than in the bendamustine-rituximab group (3 of 195 patients [1.5%]).
In terms of safety, the most common adverse event reported was neutropenia (60.8% of patients in the venetoclax-rituximab group vs 44.1% in the bendamustine-rituximab group). This contributed to the higher overall rate of grade 3 or 4 adverse events in the venetoclax-rituximab group (159 of 194 patients, or 82.0%) than in the bendamustine-rituximab group (132 of 188 patients, or 70.2%). The incidence of serious adverse events and of adverse events that resulted in death was similar in the 2 groups.
Conclusion. For patients with relapsed or refractory chronic lymphocytic leukemia, venetoclax plus rituximab resulted in significantly higher rates of progression-free survival than standard therapy with bendamustine plus rituximab.
Commentary
Despite advances in treatment, chronic lymphocytic leukemia remains incurable with conventional chemoimmunotherapy regimens, and almost all patients relapse after initial therapy. Following relapse, the goal is to provide durable progression-free survival, which may extend overall survival [1]. In the subset of patients whose chronic lymphocytic leukemia carries a deletion or mutation of the TP53 locus on chromosome 17p13, the disease responds especially poorly to conventional treatment, and median survival is less than 3 years from the time first treatment is initiated.
Apoptosis is a process of programmed cell death that proceeds through extrinsic and intrinsic pathways. The B-cell lymphoma/leukemia 2 (BCL-2) protein is a key regulator of the intrinsic apoptotic pathway, and almost all chronic lymphocytic leukemia cells elude apoptosis through overexpression of BCL-2. Venetoclax is an orally administered, highly selective, potent BCL-2 inhibitor approved by the FDA in 2016 for the treatment of chronic lymphocytic leukemia patients with 17p deletion who have received at least 1 prior therapy [3]. There has been great interest in combining venetoclax with other agents active in chronic lymphocytic leukemia, such as chemotherapy, monoclonal antibodies, and B-cell receptor inhibitors. The combination of venetoclax with the CD20 antibody rituximab was found to overcome microenvironment-induced resistance to venetoclax [4].
In this analysis of the phase 3 MURANO trial of venetoclax plus rituximab in relapsed or refractory chronic lymphocytic leukemia, Seymour et al demonstrated a significantly higher rate of progression-free survival with venetoclax plus rituximab than with standard chemoimmunotherapy with bendamustine plus rituximab. In addition, secondary efficacy measures, including the complete response rate, the overall response rate, and overall survival, were also higher with venetoclax plus rituximab than with bendamustine plus rituximab.
There are several limitations of this study. First, the study was terminated early at the time of the data review on 6 September 2017. The independent data monitoring committee recommended that the primary analysis be conducted at that time because the prespecified statistical boundaries for early stopping had been crossed for progression-free survival on the basis of stratified log-rank tests. In a letter to the editor, Alexander et al questioned the validity of results when design assumptions are violated. In immunotherapy trials, progression-free survival curves often separate at later time points rather than diverging at a constant rate; this violates the key assumption of proportional hazards. When a study is terminated early, post hoc confirmatory analyses and evaluations of the robustness of the statistical plan can be used; however, prespecified analyses are critical to reproducibility in trials that are meant to be practice-changing [5]. Second, complete response rates were lower when responses were assessed by the independent review committee than when assessed by the investigators. While this may reflect a degree of investigator bias, the overall results were similar, and the effect of venetoclax plus rituximab remained significantly better than that of bendamustine plus rituximab.
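To make the proportional-hazards concern concrete, the following illustrative Python simulation (hypothetical numbers, not MURANO data) generates a control arm with a constant event hazard and a treatment arm whose benefit begins only after a delay. The interval-specific event-rate ratios differ sharply between early and late follow-up, which is the situation in which a single hazard ratio, and early stopping based on it, can mislead.

```python
# Illustrative simulation of a delayed treatment effect (hypothetical hazards,
# not trial data): when the benefit emerges only after a delay, the hazard
# ratio is not constant over time, and a single summary HR averages over it.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Control arm: constant monthly hazard of 0.05 (exponential event times).
control = rng.exponential(scale=1 / 0.05, size=n)

# Treatment arm: same hazard for the first 6 months, then the hazard drops
# to 0.02 (a delayed effect) -- piecewise-exponential event times.
early = rng.exponential(scale=1 / 0.05, size=n)
late = 6 + rng.exponential(scale=1 / 0.02, size=n)
treatment = np.where(early <= 6, early, late)

def events_in_window(times, start, stop):
    """Events in (start, stop] and number still at risk at `start`."""
    at_risk = times > start
    return np.sum((times > start) & (times <= stop)), at_risk.sum()

for start, stop in [(0, 6), (6, 24)]:
    (e_t, n_t), (e_c, n_c) = events_in_window(treatment, start, stop), events_in_window(control, start, stop)
    crude_ratio = (e_t / n_t) / (e_c / n_c)
    print(f"months {start}-{stop}: crude event-rate ratio ~ {crude_ratio:.2f}")
# The ratio is ~1 early and well below 1 late, so no single constant
# hazard ratio describes the whole follow-up period.
```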
Applications for Clinical Practice
The current study demonstrated that venetoclax combined with rituximab is safe and effective in the treatment of chronic lymphocytic leukemia patients, with or without 17p deletion, who have received at least one prior therapy. The most common serious adverse event was neutropenia, and venetoclax also carries a risk of tumor lysis syndrome. Careful monitoring, slow dose ramp-up, and adequate prophylaxis can mitigate some of these adverse effects.
—Ka Ming Gordon Ngai, MD, MPH
1. Tam CS, Stilgenbauer S. How best to manage patients with chronic lymphocytic leukemia with 17p deletion and/or TP53 mutation? Leuk Lymphoma 2015;56:587–93.
2. Zenz T, Eichhorst B, B
3. FDA news release. FDA approves new drug for chronic lymphocytic leukemia in patients with a specific chromosomal abnormality. 11 April 2016. Accessed 9 May 2018 at www.fda.gov/newsevents/newsroom/pressannouncements/ucm495253.htm.
4. Thijssen R, Slinger E, Weller K, et al. Resistance to ABT-199 induced by micro-environmental signals in chronic lymphocytic leukemia can be counteracted by CD20 antibodies or kinase inhibitors. Haematologica 2015;100:e302-e306.
5. Alexander BM, Schoenfeld JD, Trippa L. Hazards of hazard ratios—deviations from model assumptions in immunotherapy. N Engl J Med 2018;378:1158–9.
Low-Intensity PSA-Based Screening Did Not Reduce Prostate Cancer Mortality
Study Overview
Objective. To determine the effect of a single prostate-specific antigen (PSA) screening and standardized diagnostic pathway on prostate cancer–specific mortality when compared with no screening.
Design. Cluster randomized controlled trial.
Setting and participants. The study was conducted at 573 primary care clinics in the United Kingdom. A total of 419,582 men, 50 to 69 years of age, were recruited between 2001 and 2009, and follow-up ended in 2016. Primary care clinics were randomized to intervention or control. Men in intervention-group clinics received an invitation to a single PSA test, followed by a standardized prostate biopsy in men with PSA levels of 3 ng/mL or greater. A treatment trial comparing radical prostatectomy, radiotherapy with androgen deprivation therapy, and active monitoring was embedded within the screening trial [1]. Control-group practices provided standard care, and PSA testing was provided only to men who requested it. The majority of primary care practices were in urban areas (88%–90%) and had multiple partners within the practice (88%–89%). Cases of prostate cancer detected in either group during the course of the study were managed by the same clinicians.
Main outcome measures. The main outcome measure was definite, probable, or intervention-related prostate cancer mortality at a median follow-up of 10 years. An independent cause-of-death evaluation committee blinded to group assignment determined the cause of death in each case. Secondary outcomes included all-cause mortality and prostate cancer stage and Gleason grade at diagnosis. The analysis was performed on an intention-to-screen basis. Survival analysis using Kaplan-Meier plots was performed to estimate the cumulative incidence of the outcomes described above, and mixed-effects Poisson regression models were used to compare prostate cancer incidence and mortality in intervention versus control practices while accounting for clustering.
Main results. A total of 189,386 men were in the intervention group; 40% attended the PSA testing clinic, and 67,313 (36%) had a blood sample taken for PSA testing, resulting in 64,436 valid PSA test results. Of these, 6857 men (11%) had elevated PSA levels, of whom 85% underwent prostate biopsy. In the control group, contamination (PSA testing among control participants) was estimated to occur at a rate of approximately 10%–15% over 10 years. After a median follow-up of 10 years, 549 men had died of prostate cancer–related causes in the intervention group (0.30 per 1000 person-years) and 647 in the control group (0.31 per 1000 person-years). The rate difference was 0.013 per 1000 person-years, with a risk ratio (RR) of 0.96 (95% confidence interval [CI], 0.85–1.08; P = 0.50), which was not statistically significant. The proportion of men diagnosed with prostate cancer was higher in the intervention group than in the control group (4.3% vs 3.6%; RR, 1.19; 95% CI, 1.14–1.25; P < 0.001), with incidence rates of 4.45 per 1000 person-years in the intervention group and 3.80 per 1000 person-years in the control group. Prostate cancers in the intervention group were less likely to be high grade or advanced stage than those in the control group. There were 25,459 deaths in the intervention group and 28,306 deaths in the control group, with no significant difference in all-cause mortality between the two groups.
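The mortality comparison above can be approximately reconstructed from the aggregate counts alone. The sketch below is a simplified calculation, not the trial's mixed-effects Poisson model, and the person-year denominators are back-calculated from the reported rates rather than taken from the publication.

```python
# Rough reconstruction of the reported prostate cancer mortality comparison
# from aggregate counts. The trial used mixed-effects Poisson regression with
# clustering by practice; this is a simplified aggregate calculation, and the
# person-years are back-calculated from the reported rates (approximate).
import math

deaths_intervention, rate_intervention = 549, 0.30 / 1000   # per person-year
deaths_control, rate_control = 647, 0.31 / 1000

py_intervention = deaths_intervention / rate_intervention   # ~1.83 million person-years
py_control = deaths_control / rate_control                  # ~2.09 million person-years

rr = (deaths_intervention / py_intervention) / (deaths_control / py_control)

# Wald confidence interval on the log rate ratio for Poisson counts.
se_log_rr = math.sqrt(1 / deaths_intervention + 1 / deaths_control)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"rate ratio ~ {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# prints ~0.97 (0.86-1.08), close to the reported RR of 0.96 (0.85-1.08);
# the small differences reflect rounding of the published rates.
```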
Conclusion. A single PSA screening among men aged 50–69 years did not reduce prostate cancer mortality after a median of 10 years of follow-up but did increase the detection of low-risk prostate cancer. These results do not support single PSA testing as a population-based screening strategy for prostate cancer.
Commentary
The use of the PSA test for population-based prostate cancer screening is controversial; the United States Preventive Services Task Force (USPSTF) recommended against routine use of the PSA test for prostate cancer screening because the evidence of benefit is weak and because of the potential for unintended harms of screening [2]. The present study is the largest study of PSA screening to date, and it found that a low-intensity screening approach, a single PSA test, was not effective in reducing prostate cancer deaths but instead identified additional early-stage prostate cancer cases. This result contrasts with previous large-scale studies, in which screening led to an increased rate of prostate cancer diagnosis and reduced prostate cancer mortality in one trial [3] and had no effect on diagnosis or mortality in another [4].
The rationale for US
Applications for Clinical Practice
The PSA test as a screening tool for prostate cancer has significant drawbacks, and population screening strategies using this test must grapple with misdiagnosis, overdiagnosis, and treatment that can have harmful consequences. The alternative of not screening is that prostate cancer may be diagnosed at later stages, and more men may suffer morbidity and mortality from the disease. A better test and screening strategy are needed to balance the benefits and harms of screening so that older men may benefit from early diagnosis of prostate cancer.
1. Hamdy FC, Donovan JL, Lane JA, et al. 10-year outcomes after monitoring, surgery, or radiotherapy for localized prostate cancer. N Engl J Med 2016;375:1415–24.
2. Moyer VA; U.S. Preventive Services Task Force. Screening for prostate cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2012;157:120–34.
3. Schroder FH, Hugosson J, Roobol MJ, et al. Screening and prostate-cancer mortality in a randomized European study. N Engl J Med 2009;360:1320–8.
4. Andriole GL, Grubb RL III, Buys SS, et al. Mortality results from a randomized prostate-cancer screening trial. N Engl J Med 2009;360:1310–9.
5. Donovan JL, Hamdy FC, Lane JA, et al. Patient-reported outcomes after monitoring, surgery, or radiotherapy for prostate cancer. N Engl J Med 2016;375:1425–37.
CAR T-Cell Therapy Shows High Levels of Durable Response in Refractory Large B-Cell Lymphoma
Study Overview
Objective. To evaluate the efficacy and safety of the anti-CD19 chimeric antigen receptor (CAR) T-cell therapy axicabtagene ciloleucel (axi-cel) in patients with refractory large B-cell lymphoma.
Design. The ZUMA-1 trial was a phase 1–2 multicenter study. The results of the primary analysis and of an updated analysis with 1 year of follow-up of the phase 2 portion of ZUMA-1 are reported here.
Setting and participants. The phase 2 portion of the ZUMA-1 trial enrolled 111 patients from 22 centers in the United States (21) and Israel (1) from November 2015 through September 2016. Eligible patients included those with histologically confirmed large B-cell lymphoma, primary mediastinal B-cell lymphoma, or transformed follicular lymphoma. Patients were required to have refractory disease, defined as disease progression or stable disease as the best response to chemotherapy, or disease progression within 12 months following autologous stem cell transplantation. All patients were required to have adequate organ function, an absolute neutrophil count > 1000/µL, an absolute lymphocyte count > 100/µL, and a platelet count > 75,000/µL.
Intervention. Patients first underwent leukapheresis and CAR T-cell manufacturing. Following this, patients were admitted to the hospital and received a low-dose conditioning regimen consisting of fludarabine 30 mg/m² and cyclophosphamide 500 mg/m² given on days –5, –4, and –3. On day 0, patients were infused with their manufactured CAR T-cell product at a target dose of 2 × 10⁶ CAR T cells per kilogram of body weight. Patients could not receive "bridging chemotherapy" between leukapheresis and infusion of the axi-cel product. Patients could be retreated with axi-cel if they experienced disease progression at least 3 months after their first dose.
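For readers unfamiliar with the dosing conventions, the arithmetic implied by this regimen is straightforward; the sketch below works through it for a hypothetical patient. The 70 kg body weight and 1.73 m² body surface area are assumed illustrative values, not figures from the trial.

```python
# Simple dose arithmetic for the regimen described above, for a hypothetical
# patient. Weight and body surface area are assumed values (not trial data).
weight_kg = 70.0
bsa_m2 = 1.73

fludarabine_daily_mg = 30 * bsa_m2        # 30 mg/m2 on days -5, -4, -3
cyclophosphamide_daily_mg = 500 * bsa_m2  # 500 mg/m2 on days -5, -4, -3
target_car_t_cells = 2e6 * weight_kg      # 2 x 10^6 CAR T cells per kg on day 0

print(f"fludarabine: {fludarabine_daily_mg:.0f} mg/day x 3 days")
print(f"cyclophosphamide: {cyclophosphamide_daily_mg:.0f} mg/day x 3 days")
print(f"target axi-cel dose: {target_car_t_cells:.1e} CAR T cells")
```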
Main outcome measures. The primary endpoint of this study was objective response rate, which was defined as the combined rate of complete response (CR) and partial response (PR). The secondary endpoints were duration of response, progression-free survival (PFS), overall survival (OS), and adverse events. Blood levels of CAR T cells and serum cytokine levels were followed.
Main results. A total of 111 patients were enrolled. Axi-cel was administered to 101 patients, who were included in the intention-to-treat analysis. Of these, 77 had diffuse large B-cell lymphoma and 24 had primary mediastinal B-cell lymphoma or transformed follicular lymphoma. The median follow-up was 8.7 months for the primary analysis and 15.4 months for the updated analysis. The median time from leukapheresis to delivery of the product was 17 days, and manufacturing was unsuccessful for only 1 patient. The median age of the treated patients was 58 years. Most patients (77%) had disease resistant to second-line or later therapy, and 21% had disease relapse after autologous stem cell transplantation.
Primary analysis results. The objective response rate was 82%, with a 54% CR rate. The median time to response was 1 month, and the median duration of response was 8.1 months. Response rates were consistent across all subgroups, including age, disease stage, IPI score, presence or absence of bulky disease, cell-of-origin subtype, and use of tocilizumab or glucocorticoids. High response rates were maintained in those with primary refractory disease (response rate, 88%) and those with prior autologous stem cell transplantation (response rate, 76%). The response rate was not influenced by CD19 expression. At the time of the primary analysis, 52 patients had died of disease progression and 3 had died of adverse events during treatment. Forty-four patients remained in remission, 39 of whom maintained a CR.
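The summary above reports response percentages without confidence intervals; a reader can attach an exact binomial (Clopper-Pearson) interval to the objective response rate as sketched below. The responder count of 83 is inferred from the reported 82% of 101 treated patients and is an assumption, not a figure stated above.

```python
# Exact (Clopper-Pearson) 95% confidence interval for the objective response
# rate. 83 responders out of 101 is inferred from the reported 82% and is an
# assumed count, not one stated in the summary.
from scipy.stats import beta

responders, n = 83, 101
alpha = 0.05
lower = beta.ppf(alpha / 2, responders, n - responders + 1)
upper = beta.ppf(1 - alpha / 2, responders + 1, n - responders)
print(f"ORR {responders / n:.0%}, 95% CI {lower:.0%}-{upper:.0%}")
```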
Updated analysis results. At the time of the updated analysis, 108 patients in the phase 1 and phase 2 portions had been followed for at least 12 months. The objective response rate was 82%, with a CR rate of 58%. At the data cutoff, 42% of patients remained in response, including 40% who maintained a CR. Again, response rates were consistent across all previously mentioned subgroups. The median duration of response was 11.1 months. The median PFS was 5.8 months, with a PFS rate of 41% at 15 months. The median OS was not reached, and 56% of patients remained alive at the time of this analysis.
Safety. During treatment, 100% of patients had adverse events (AEs), which were grade 3 or higher in 95%. Fever (85%), neutropenia (84%), and anemia (66%) were the most common AEs, and myelosuppression was the most common grade 3 or higher AE. Cytokine release syndrome occurred in 93% of patients and was grade 3 or higher in 13% (9% grade 3, 3% grade 4, and 1% grade 5); 17% of patients required vasopressor support. The median time from infusion to the onset of cytokine release syndrome was 2 days (range, 1–12), and the median time to resolution was 8 days. One grade 5 event of hemophagocytic lymphohistiocytosis and one grade 5 cardiac arrest occurred. Grade 3 or higher neurological events occurred in 28% of patients, with encephalopathy occurring in 21%. Neurological events occurred at a median of 5 days after infusion and lasted for a median of 17 days. Forty-three percent of patients received tocilizumab and 27% received glucocorticoids.
Biomarkers. CAR T-cell levels peaked within 14 days after infusion, and 3 patients with a CR at 24 months still had detectable levels in the blood. CAR T-cell expansion was significantly associated with disease response. Interleukin-6, -10, -15, and -2Rα levels were significantly associated with neurological events and with cytokine release syndrome of grade 3 or higher. Anti-CAR antibodies were not detected in any patient.
Commentary
Diffuse large B-cell lymphoma (DLBCL) is the most common non-Hodgkin lymphoma, with 5-year survival rates of approximately 60% following conventional chemoimmunotherapy in the first-line setting. Following relapse, salvage therapy followed by high-dose chemotherapy with autologous stem-cell transplantation can result in long-term remission; however, patients who relapse again have a poor prognosis. The recently published SCHOLAR-1 study retrospectively analyzed the outcomes of patients with relapsed or refractory DLBCL and found that, among patients with refractory disease, the objective response rate to salvage therapy was only 26% (7% CR), with a median OS of 6.3 months [1]. CAR-engineered T cells offer a novel therapy for these patients, who otherwise have very poor outcomes.
Early CAR T-cell trials by Brentjens and colleagues first documented a CR in a subset of patients with refractory hematologic malignancies [2]. Since that time there has been tremendous advancement in CAR T-cell development and clinical application. In the December 2017 issue of the New England Journal of Medicine, 2 studies were published validating the efficacy of CD19-targeted CAR T-cell therapy in relapsed/refractory lymphoma: the current ZUMA-1 study and a smaller case series by Schuster and colleagues. Schuster et al evaluated the CD19-directed CAR CTL019 in 28 patients with relapsed/refractory DLBCL or follicular lymphoma; the ORR in that study was 64%, with a CR rate of 57% [3]. Similarly, in the current ZUMA-1 study the CR rate was 54% among 101 patients with relapsed and refractory large B-cell lymphomas. In addition, with a median follow-up of 15.4 months, responses were ongoing in 42% of patients, including 40% who had a CR. The durability of such responses has been demonstrated in 3 of 7 patients from the phase 1 portion of this study at 24 months, and durable responses have also been reported with anti-CD19 CAR T-cell therapy in 4 of 5 patients who had a CR and remained in remission after 3–4 years of follow-up [4]. While CAR T-cell therapy represents an exciting therapeutic strategy, the long-term durability of responses remains unclear, and approximately 50% of patients in this study did not achieve a durable response; the reasons for this are not completely understood.
One of the most discussed aspects of CAR T-cell therapy has been its unique toxicity profile, which was again evident in the ZUMA-1 study. As noted, 95% of patients in this study experienced a grade 3 or higher AE. Of interest, cytokine release syndrome occurred in 93% of patients, with 13% of cases being grade 3 or higher, and 2 deaths were attributed to these events. Neurological toxicity was also noted in 64% of patients in this trial. While the vast majority of these AEs were reversible, they clearly represent substantial treatment-related morbidity.
The results of the ZUMA-1 study led to the FDA approval of anti-CD19 CAR T-cell therapy for relapsed or refractory large B-cell lymphoma in October 2017 and represent a pivotal advancement in the management of these patients, who otherwise have limited treatment options and poor outcomes. The ZUMA-1 trial not only demonstrates the efficacy of such agents but also demonstrates the feasibility of incorporating them into clinical practice, with a 99% manufacturing success rate and a short (median, 17 days) product delivery time. The economic burden of such therapies warrants particular consideration, as the indications for CAR T-cell therapy will continue to expand, driving the cost of care higher. Nevertheless, this represents an exciting step forward in personalized medicine.
Applications for Clinical Practice
CAR T-cell therapy with the CD19-targeted CAR axicabtagene ciloleucel (axi-cel) results in a high rate of objective and durable responses in patients with relapsed or refractory large B-cell lymphomas. While such treatment carries a high rate of toxicity with regard to cytokine release syndrome and neurological complications, it represents an important treatment option for patients with refractory disease and a historically poor prognosis. Policies will need to be developed, however, to address the economic challenges associated with such treatments.
1. Crump M, Neelapu SS, Farooq U, et al. Outcomes in refractory diffuse large B-cell lymphoma: results from the international SCHOLAR-1 study. Blood 2017;130:1800–8.
2. Brentjens RJ, Riviere I, Park JH, et al. Safety and persistence of adoptively transferred autologous CD19-targeted T cells in patients with relapsed or chemotherapy refractory B-cell leukemias. Blood 2011;118:4817–28.
3. Schuster SJ, Svoboda J, Chong EA, et al. Chimeric antigen receptor T cells in refractory B-cell lymphomas. N Engl J Med 2017;377:2545–54.
4. Kochenderfer JN, Somerville RP, Lu T, et al. Long-duration complete remissions of diffuse large B cell lymphoma after anti-CD19 chimeric antigen receptor T cell therapy. Mol Ther 2017;25:2245–53.
HIV Transmission Risk Is Considerable at the Time of STI Diagnosis in HIV-Infected Persons
Study Overview
Objective. To evaluate the incidence and demographic factors associated with chlamydia, gonorrhea, and syphilis among HIV-infected persons in Washington, DC.
Design. Descriptive, retrospective cohort study.
Setting and participants. HIV-infected persons enrolled at 13 DC Cohort sites from 2011 to 2015. The DC Cohort is a clinic-based, city-wide, longitudinal observational cohort launched in 2011 to better understand HIV epidemiology in DC, describe clinical outcomes among those in care, and improve the quality of care for people living with HIV in the DC metropolitan area. Eligible participants included those enrolled from 1 January 2011 to 31 March 2015. Participant follow-up extended from enrollment to 30 June 2015 or until death, withdrawal from the DC Cohort, or loss to follow-up, whichever occurred first.
Main outcome measures. Confirmed cases of chlamydia, gonorrhea, and syphilis, as well as HIV viral loads at the time of sexually transmitted infection (STI) diagnosis as a proxy for HIV transmission risk.
Main results. At the time of the study, approximately 11,235 persons with HIV infection were receiving care at the 13 DC Cohort sites, of whom 8732 (77.7%) were approached for enrollment. Of those approached, 7004 (80.2%) agreed to participate and provided consent, 948 (10.9%) declined to enroll, 14 (0.2%) withdrew consent, and 766 (8.8%) remained undecided. There were significant differences between those consenting and declining, including female gender (27.8% of those consenting vs 36.1% of those declining, P < 0.001), white race/ethnicity (13.1% of those consenting vs 6.6% of those declining, P < 0.001), and private insurance status (27.6% of those consenting vs 33.2% of those declining, P < 0.001).
Median age of patients was 47 years (interquartile range, 36.5–54.5 years); 71% were male, 76% were non-Hispanic black, 39% were men who have sex with men (MSM), and 29% were heterosexual; 63.8% had public insurance. Overall, 6.7% (451/6672) developed an incident STI during a median follow-up of 32.5 months (4% chlamydia, 3% gonorrhea, 2% syphilis), and 30% of those with an incident STI had 2 or more STI episodes. The incidence rate of any STI was 3.8 cases per 100 person-years (95% confidence interval [CI], 3.5–4.1); rates were higher among those aged 18–34 years (10.8; 95% CI, 9.7–12.0), transgender women (9.9; 95% CI, 6.9–14.0), Hispanics (9.2; 95% CI, 7.2–11.8), and MSM (7.7; 95% CI, 7.1–8.4). Multivariate regression analysis showed younger age, Hispanic ethnicity, MSM risk, and higher nadir CD4 counts to be strongly associated with STIs. Among those with an STI, 41.8% had a detectable viral load within 1 month of STI diagnosis, and 14.6% had a viral load ≥ 1500 copies/mL.
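For readers who want to reproduce the arithmetic behind these figures, the sketch below shows how a crude incidence rate per 100 person-years and a normal-approximation confidence interval can be computed. The person-time denominator in the example is hypothetical (the summary above reports the rate, not the person-years), so this is only an illustration of the calculation, not the study's analysis code.

```python
import math

def incidence_rate_per_100py(events: int, person_years: float, z: float = 1.96):
    """Crude incidence rate per 100 person-years with a normal-approximation
    (Poisson) confidence interval: rate = events / person-time,
    SE = sqrt(events) / person-time."""
    rate = events / person_years
    se = math.sqrt(events) / person_years
    return tuple(100 * x for x in (rate, rate - z * se, rate + z * se))

# Hypothetical person-time: 451 incident STIs over roughly 11,900 person-years
# of follow-up (a value NOT reported in the summary above) yields a rate close
# to the published 3.8 per 100 person-years.
rate, lo, hi = incidence_rate_per_100py(events=451, person_years=11_900)
print(f"{rate:.1f} per 100 person-years (95% CI {lo:.1f}-{hi:.1f})")
```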
Conclusion. STIs are highly prevalent among HIV-infected persons receiving care in DC. HIV transmission risk is considerable at the time of STI diagnosis. Interventions toward risk reduction, antiretroviral therapy adherence, and HIV virologic suppression are critical at the time of STI evaluation.
Commentary
Although the number of new HIV cases in Washington, DC, has been decreasing over recent years [1], it still has one of the highest rates of HIV infection in the United States [2]. In this large-scale, single-city analysis, Lucar et al reported on the incidence and factors associated with the development of chlamydia, gonorrhea, and syphilis in a cohort of people living with HIV in care in DC. Consistent with incidence rates among the DC general population [2], chlamydia had the highest incidence, followed by gonorrhea and then syphilis, each with particularly high rates among 18- to 34-year-olds, MSM, transgender women, and Hispanics.
Studies have shown that many people with HIV do not consistently practice safer sex, placing themselves and others at risk for HIV or STI infection/co-infection [3]. While most HIV prevention programs target HIV-negative individuals, targeting sexual risk behaviors in HIV-positive people can prevent the transmission of HIV and other STIs to uninfected individuals and can also prevent co-infections with other STIs [3]. However, effective interventions to maintain long-term behavior change and prevent HIV transmission are needed. In a recent systematic review and meta-analysis by Globerman et al [3] assessing the effectiveness of HIV/STI prevention interventions for people living with HIV, group-level health education interventions were found to be effective in reducing HIV/STI incidence when compared to attention controls. Another intervention type, comprehensive risk counseling and services, was found to be effective in reducing sexual risk behaviors when compared to both active and attention controls. All other intervention types showed no statistically significant effect or had low or very low quality of evidence. Improving strategies to reduce the impact of HIV and STDs may require an understanding of how different populations are experiencing those conditions [1].
This study has several limitations. First, the observational nature of the DC Cohort precluded standardized STI screening for all participants. STIs are frequently asymptomatic, and differences in screening practices can affect the observed STI frequency [4,5]. Consequently, the reported incidence rates likely underestimate the true STI incidence among people with HIV in care in DC. Furthermore, STI screening may yield diagnosis dates distant from the actual time of STI acquisition. The study design also limited the availability of HIV viral loads obtained during the same encounter as the STI diagnosis. In addition, the population enrolled in the DC Cohort may not be fully representative of the larger HIV-infected population in DC, as enrollment requires some degree of engagement in care, and the demographics of those declining to participate differed somewhat from those who provided consent.
Strengths of the study include its city-wide reach, prospective enrollment of participants, longitudinal design, and large sample size. In addition, linking data from clinical sites with data reported to the local health department improved the accuracy of STI diagnosis frequency and provided insight into care received for STIs outside the primary HIV care site.
Applications for Clinical Practice
Risk reduction interventions are needed for people living with HIV to help control the spread of STIs and reduce HIV transmission. More high-quality research on HIV/STI prevention interventions is needed. While there have been only a few studies, the existing data indicate that integration of STI services into HIV care and treatment services can be feasible and can have positive outcomes [6].
1. Annual Epidemiology & Surveillance Report. District of Columbia Department of Health HIV/AIDS, Hepatitis, STD, and TB Administration (HAHSTA). Accessed at https://doh.dc.gov/sites/default/files/dc/sites/doh/publication/attachments/HAHSTA%20Annual%20Report%202017%20-%20Final%20%282%29.pdf.
2. Centers for Disease Control and Prevention. HIV Surveillance Report, 2016; vol. 28. Accessed at www.cdc.gov/hiv/library/reports/hiv-surveillance.html.
3. Globerman J, Mitra S, Gogolishvili D, et al. HIV/STI prevention interventions: a systematic review and meta-analysis. Open Med (Wars) 2017;12:450–67.
4. Berry SA, Ghanem KG, Mathews WC, et al; HIV Research Network. Brief report: gonorrhea and chlamydia testing increasing but still lagging in HIV clinics in the United States. J Acquir Immune Defic Syndr 2015;70:275–9.
5. Hoover KW, Butler M, Workowski K, et al; Evaluation Group for Adherence to STD and Hepatitis Screening. STD screening of HIV-infected MSM in HIV clinics. Sex Transm Dis 2010;37:771–6.
6. Kennedy CE, Haberlen SA, Narasimhan M. Integration of sexually transmitted infection (STI) services into HIV care and treatment services for women living with HIV: a systematic review. BMJ Open 2017;7:e015310.
Which Is More Effective for Hypertension Management: User- or Expert-Driven E-Counseling?
Study Overview
Objective. To assess whether systolic blood pressure improved with expert-driven or user-driven e-counseling compared with control intervention in patients with hypertension over a 4-month period.
Design. Three–parallel group, double-blind randomized controlled trial.
Setting and participants. In Toronto, Canada, participants were recruited through the Heart and Stroke Foundation heart disease risk assessment website, as well as posters at University Health Network facilities. Participants diagnosed with stage 1 or 2 hypertension (systolic blood pressure [SBP] 140–180 mm Hg, diastolic blood pressure [DBP] 90–110 mm Hg) and between the ages of 35 and 74 years were eligible. Hypertension diagnoses were confirmed with the participant’s family doctor at baseline if participants were not prescribed antihypertensive medication. All participants were required to have an unchanged prescription for antihypertensive medication for at least 2 months before enrollment. Participants prescribed antihypertensive medication were also required to have SBP ≥ 130 mm Hg or DBP ≥ 85 mm Hg in order to prevent “floor effects.” Exclusion criteria included diagnosis of kidney disease, major psychiatric illness (eg, psychosis), alcohol or drug dependence in the previous year, pregnancy, and sleep apnea.
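As a rough illustration of how these enrollment thresholds combine, the sketch below encodes the blood pressure and age criteria described above. It is a simplified reading of the published criteria (for example, whether the SBP and DBP ranges were applied jointly or separately, and how the diagnosis interacts with current readings, is not spelled out here), not the trial's actual screening logic; the function name and example values are hypothetical.

```python
def meets_bp_criteria(sbp: int, dbp: int, age: int, on_antihypertensive: bool) -> bool:
    """Simplified, illustrative check of the eligibility thresholds summarized
    above; NOT the trial's actual screening algorithm.

    - Age 35-74 years
    - Stage 1 or 2 hypertension range: SBP 140-180 mm Hg or DBP 90-110 mm Hg
      (treated as an either/or condition on current readings here)
    - If on antihypertensive medication, SBP >= 130 or DBP >= 85 mm Hg is
      additionally required to avoid floor effects
    """
    if not 35 <= age <= 74:
        return False
    if not ((140 <= sbp <= 180) or (90 <= dbp <= 110)):
        return False
    if on_antihypertensive and not (sbp >= 130 or dbp >= 85):
        return False
    return True


print(meets_bp_criteria(sbp=146, dbp=92, age=58, on_antihypertensive=True))  # True
print(meets_bp_criteria(sbp=125, dbp=80, age=58, on_antihypertensive=True))  # False
```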
Participants were randomly assigned to 1 of 3 intervention groups: control, expert-driven, and user-driven e-counseling. Randomization was conducted by a web-based program using randomly permuted blocks. The randomization code was known only to the research coordinator and not to the investigators or research assistants who administered the assessments.
Intervention. Briefly, user-driven e-counseling enabled participants to set their own goals or to select the interventions used to reach their behavioral goal. The user-driven group received weekly e-mails that allowed participants to select their areas of lifestyle change using text and video web links embedded in the e-mail. Expert-driven e-counseling involved prescribing specific lifestyle behavior changes, which was intended to facilitate adherence to behavior change. Participants in the expert-driven group received the same hypertension management recommendations for lifestyle change as the user-driven group; however, their weekly e-mails consisted of predetermined exercise and dietary goals. The control group received weekly e-mails provided by the Heart and Stroke Foundation e-Health program that contained a brief newsletter article regarding BP management through lifestyle changes. The control group was distinct from the intervention groups in that its e-mails were limited to general information on BP management. Blinding to group assignment was maintained during the baseline and 4-month follow-up assessments.
Main outcome measures. The primary outcome was SBP; secondary outcomes included DBP, pulse pressure (PP), total cholesterol, 10-year Framingham cardiovascular risk (10-year CVD risk), daily physical activity, and dietary habits. Anthropometric characteristics, medical history, medication information, resting BP, daily step count, dietary behavior, participants’ readiness for lifestyle behavior change, and participants’ cardiovascular risk (calculated as the Framingham 10-year absolute risk) were collected at the baseline and 4-month follow-up assessments.
Baseline and 4-month follow-up assessments at the Peter Munk Cardiac Center, Toronto General Hospital, University Health Network were scheduled between 8 AM and 12 PM to minimize diurnal BP variability. All participants fasted for 12 hours prior to their assessment to ensure accurate cholesterol measurements. Participants were also instructed to avoid smoking for > 4 hours, caffeine for 12 hours, and strenuous exercise for 24 hours prior to their assessment.
BP was measured using a validated protocol for automated BP assessment with the BpTRU blood pressure recording device. Participants were seated for > 5 minutes prior to activation of the BpTRU device. The BP cuff was applied to the participant’s left arm by a trained research assistant. Following the initial BP measurement, the research assistant exited the room while the BpTRU device completed an automated series of 5 BP recordings separated by 1-minute intervals. The recorded BP at each assessment interval was the mean of these 5 BpTRU measurements. PP was calculated as the difference between the SBP and DBP readings.
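The sketch below illustrates the simple arithmetic just described: averaging the 5 automated readings and deriving pulse pressure as SBP minus DBP. The readings in the example are hypothetical, and the function name is illustrative rather than anything used in the trial.

```python
from statistics import mean

def summarize_bp(readings: list[tuple[int, int]]) -> dict[str, float]:
    """Mean of a series of automated (SBP, DBP) readings plus pulse pressure,
    mirroring the protocol above: the reported BP is the mean of the 5 BpTRU
    measurements, and PP = SBP - DBP."""
    sbp = mean(s for s, _ in readings)
    dbp = mean(d for _, d in readings)
    return {"SBP": sbp, "DBP": dbp, "PP": sbp - dbp}

# Hypothetical series of 5 automated readings (mm Hg):
# mean SBP 140, mean DBP 89.6, PP about 50.4
print(summarize_bp([(142, 91), (139, 89), (141, 90), (138, 88), (140, 90)]))
```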
Daily physical activity was defined as the mean daily step count over 4 days (3 weekdays, 1 weekend day) recorded on a pedometer (XL-18CN Activity Monitor), which all participants were given to use as part of the study. Diet was measured as adherence to recommended guidelines for daily intake of fruits and vegetables and was evaluated with the validated NIH/National Cancer Institute Diet History Questionnaire. Readiness for exercise and dietary change was measured using a questionnaire from the authors’ previous trial, and the stages of change were defined as follows: precontemplation (not ready to adhere to the target behavior in the next 6 months), contemplation (ready to adhere to the target behavior in the next 6 months), preparation (ready to adhere to the target behavior in the next 4 weeks), action (adherence to the behavior but for < 6 months), and maintenance (adherence to the behavior for ≥ 6 months).
For the primary outcome (SBP), the difference among groups was evaluated using univariate linear regression. Post-hoc comparisons among the 3 treatment groups, with Bonferroni adjustment, were performed only if the overall F-test was significant. Secondary outcomes (DBP, PP, total cholesterol, 10-year CVD risk, daily steps, and daily fruit and vegetable consumption) were analyzed using the same statistical approach as the primary outcome. Statistical significance was defined by a two-tailed test with a P value < 0.05.
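A minimal sketch of this gatekeeping analysis strategy is shown below. It uses a one-way ANOVA F-test as a simplified stand-in for the trial's univariate linear regression and applies a Bonferroni correction to pairwise comparisons only when the overall test is significant; the simulated data are purely illustrative and do not reproduce the trial's results.

```python
from itertools import combinations

import numpy as np
from scipy import stats

def compare_groups(groups: dict[str, np.ndarray], alpha: float = 0.05) -> dict:
    """Overall F-test across the groups; Bonferroni-adjusted pairwise t-tests
    are run only if the overall test is significant, mirroring the approach
    described above (ANOVA substitutes for the regression model here)."""
    f_stat, p_overall = stats.f_oneway(*groups.values())
    out = {"F": f_stat, "p_overall": p_overall, "pairwise": {}}
    if p_overall < alpha:
        pairs = list(combinations(groups, 2))
        for a, b in pairs:
            _, p = stats.ttest_ind(groups[a], groups[b])
            # Bonferroni: multiply each raw p-value by the number of comparisons
            out["pairwise"][f"{a} vs {b}"] = min(p * len(pairs), 1.0)
    return out

rng = np.random.default_rng(0)
simulated = {  # hypothetical 4-month SBP changes (mm Hg), not trial data
    "control": rng.normal(-2, 8, size=43),
    "user-driven": rng.normal(-4, 8, size=42),
    "expert-driven": rng.normal(-9, 8, size=43),
}
print(compare_groups(simulated))
```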
Main results. Of those screened (n = 847), 128 participants were randomized into the study. Among the 3 groups (control, n = 43; user-driven, n = 42; expert-driven, n = 43), there were no statistically significant differences in age, sex, household income, education, ethnicity, body mass index, or medications (antihypertensive and lipid-lowering) at baseline. The average age was 56.9 ± 0.8 years, 48% were female, 66% had a household income > $60,000, 79% had a college/university or graduate school education, 73% identified as white, and over 85% were taking ≥ 1 antihypertensive medication. Baseline SBP, DBP, PP, cholesterol, 10-year CVD risk, daily steps, daily vegetable intake, smoking status, and readiness for exercise and dietary behavior change were also similar across the 3 groups. All participants were highly motivated at baseline to adopt a healthy lifestyle: 96% and 92% were already in the preparation, action, or maintenance stage of readiness for exercise and diet, respectively, while only 4% and 8% were in the precontemplation or contemplation stage for exercise and diet, respectively.
The expert-driven group showed a greater SBP decrease than controls at follow-up (mean difference for expert-driven versus control, −7.5 mm Hg; 95% CI, −12.5 to −2.6; P = 0.001). SBP reduction did not differ significantly between the user-driven and expert-driven groups (P > 0.05). DBP reduction and improvement in daily vegetable intake were not significantly different across groups. However, the expert-driven group demonstrated significant reductions compared with controls in PP (−4.6 mm Hg; 95% CI, −8.3 to −0.9; P = 0.008), cholesterol (−0.48 mmol/L; 95% CI, −0.84 to −0.14; P < 0.001), and 10-year CVD risk (−3.3%; 95% CI, −5.0 to −1.5; P = 0.005). The expert-driven group also showed significantly greater improvement than both the control and user-driven groups in daily steps (expert versus control: 2460 steps/day; 95% CI, 1137–3783; P < 0.001; expert versus user: 1844 steps/day; 95% CI, 512–3176; P = 0.003) and daily fruit consumption (expert versus control: 1.5 servings/day; 95% CI, 0.2–2.7; P = 0.01; expert versus user: 1.8 servings/day; 95% CI, 0.8–3.2; P = 0.001).
Conclusion. Expert-driven e-counseling was more effective than control in reducing SBP, PP, cholesterol, and 10-year CVD risk at the 4-month follow-up. In addition, expert-driven e-counseling was more effective than user-driven e-counseling in improving daily steps and fruit intake. It may be advisable to incorporate an expert-driven e-counseling protocol in order to accommodate participants with greater motivation to change their lifestyle behaviors and improve BP.
Commentary
In a 2014 article, the authors summarized the efficacy of lifestyle counseling interventions in face-to-face, telehealth, and e-counseling settings, noting e-counseling in particular as an emerging preventive strategy for hypertension [10]. E-counseling, a form of telehealth, presents information dynamically through combined video, text, image, and audio media, and incorporates two-way communication between patient and provider through phone, internet, and videoconferencing. This approach has the potential to increase adherence to counseling and self-care by providing convenient access to information, incorporating engaging components, expanding accessibility and comprehension of information among individuals with varying levels of health literacy, and enabling more frequent interaction with health care professionals. Importantly, effective counseling, whether delivered conventionally or through e-counseling, should include certain core components: goal-setting, self-monitoring of symptoms or behaviors, personalized training (based on the patient’s setting or resources), performance-based feedback and reinforcement of health-promoting behaviors, and procedures to enhance self-efficacy [10].
This study adds to the literature by demonstrating that the counseling communication strategy (expert- versus user-driven) used to deliver e-counseling can significantly influence intervention outcomes related to hypertension management. Strengths of this study include the use of a double-blind, randomized controlled design powered to detect clinically meaningful SBP differences; the 3–parallel-group design (expert-driven, user-driven, control) incorporating multiple evidence-based counseling approaches; the measurement of changes in multiple cardiovascular and behavioral outcomes (clinical and self-report measures); the inclusion of a theory-based measure of readiness for dietary and exercise behavior change; and the low attrition rate. However, there are key limitations, many acknowledged by the authors. The majority of study participants were white, from higher-income households, had completed higher education, and were already motivated for dietary and exercise behavior change, limiting the generalizability of the findings. The follow-up period was short (only 4 months), and the study design did not allow identification of the most impactful components of the interventions.
Applications for Clinical Practice
Expert-driven e-counseling may be an effective approach to managing hypertension: in this study it was more effective than control in reducing SBP, PP, cholesterol, and 10-year CVD risk at the 4-month follow-up, and more effective than user-driven e-counseling in improving daily steps and fruit intake. However, providers should be mindful that this approach may be limited to patients with greater motivation to change their lifestyle behaviors to lower blood pressure.
1. Weber MA, Schiffrin EL, White WB, et al. Clinical practice guidelines for the management of hypertension in the community. J Clin Hypertens 2014;16:14–26.
2. American College of Cardiology. New ACC/AHA high blood pressure guidelines lower definition of hypertension; 2017.
3. Ruilope LM. Current challenges in the clinical management of hypertension. Nat Rev Cardiol 2012;9:267–75.
4. Borghi C, Cicero AFG. Hypertension: management perspectives. Expert Opin Pharmacother 2012;13:1999–2003.
5. Gupta R, Guptha S. Strategies for initial management of hypertension. Indian J Med Res 2010;132:531–42.
6. Pietrzak E, Cotea C, Pullman S. Primary and secondary prevention of cardiovascular disease. J Cardiopulm Rehabil Prev 2014;34:303–17.
7. Watson AJ, Singh K, Myint-U K, et al. Evaluating a web-based self-management program for employees with hypertension and prehypertension: A randomized clinical trial. Am Heart J 2012;164:625–31.
8. Thomas KL, Shah BR, Elliot-Bynum S, et al. Check it, change it: a community-based, multifaceted intervention to improve blood pressure control. Circ Cardiovasc Qual Outcomes 2014;7:828–34.
9. Hallberg I, Ranerup A, Kjellgren K. Supporting the self-management of hypertension: Patients’ experiences of using a mobile phone-based system. J Hum Hypertens 2016;30:141–6.
10. Nolan RP, Liu S, Payne AYM. E-counseling as an emerging preventive strategy for hypertension. Curr Opin Cardiol 2014;29:319–23.
11. Carter BL, Bosworth HB, Green BB. The hypertension team: the role of the pharmacist, nurse, and teamwork in hypertension therapy. J Clin Hypertens 2012;14:51–65.
HIPEC for Ovarian Cancer: Standard of Care or Experimental Approach?
Study Overview
Objective. To evaluate whether the addition of hyperthermic intraperitoneal chemotherapy (HIPEC) to interval cytoreductive surgery would improve outcomes among patients receiving neoadjuvant chemotherapy for stage III epithelial ovarian cancer.
Design. Phase 3 prospective randomized clinical trial.
Setting and participants. The trial was conducted at 8 hospitals in the Netherlands and Belgium at which medical personnel had experience administering HIPEC in patients with peritoneal disease from colon cancer or pseudomyxoma peritonei. Eligible patients had newly diagnosed stage III epithelial ovarian, fallopian tube, or peritoneal cancer and were referred for neoadjuvant chemotherapy because of extensive abdominal disease or incomplete cytoreductive surgery (one or more residual tumors measuring > 1 cm in diameter). Eligibility criteria also included a performance status score of 0 to 2, normal blood counts, and adequate renal function.
Intervention. At the time of surgery, patients were randomly assigned in a 1:1 ratio to undergo interval cytoreductive surgery either with HIPEC (surgery-plus-HIPEC group) or without HIPEC (surgery group). HIPEC was administered at the end of the cytoreductive surgical procedure. The abdomen was filled with saline that circulated continuously through a heat exchanger by means of a roller pump. Perfusion with cisplatin at a dose of 100 mg per square meter of body-surface area and at a flow rate of 1 liter per minute was then initiated; the procedure took 120 minutes in total. To prevent nephrotoxicity, sodium thiosulfate was administered at the start of perfusion as an intravenous bolus (9 g per square meter in 200 mL), followed by a continuous infusion (12 g per square meter in 1000 mL) over 6 hours. In addition, patients received 3 cycles of carboplatin and paclitaxel after surgery. During follow-up, physical examination and measurement of the CA-125 level were repeated every 3 months for 2 years and then every 6 months until 5 years after the completion of chemotherapy. Computed tomography was performed at 1, 6, 12, and 24 months after the last cycle of chemotherapy.
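For readers who want to see how the per-square-meter doses above translate into absolute amounts, the following is a minimal arithmetic sketch. It assumes the Mosteller formula for body-surface area, which the trial report does not specify, so it should be read as an illustration rather than the protocol's own calculation.

```python
# Illustrative dose arithmetic only; the Mosteller BSA formula is an assumption
# not stated in the trial protocol.
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula."""
    return sqrt(height_cm * weight_kg / 3600.0)

def hipec_doses(height_cm: float, weight_kg: float) -> dict:
    """Absolute doses implied by the per-square-meter figures quoted above."""
    bsa = bsa_mosteller(height_cm, weight_kg)
    return {
        "bsa_m2": round(bsa, 2),
        "cisplatin_mg": round(100 * bsa),             # 100 mg/m2 in the perfusate
        "thiosulfate_bolus_g": round(9 * bsa, 1),     # 9 g/m2 IV bolus at start
        "thiosulfate_infusion_g": round(12 * bsa, 1), # 12 g/m2 infused over 6 hours
    }

# Example: a patient 165 cm tall weighing 70 kg (BSA ~1.79 m2 -> cisplatin ~179 mg)
print(hipec_doses(165, 70))
```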
Main outcome measure. The primary endpoint was recurrence-free survival in the intent-to-treat population. Secondary endpoints included overall survival, the side-effect profile, and health-related quality of life.
Main results. A total of 245 women were randomized between April 2007 and April 2016. The median follow-up at the time of recurrence-free survival analysis was 4.7 years. Recurrence-free survival events occurred in 81% of the HIPEC group vs 89% of the control group; median recurrence-free survival was 14.2 months vs 10.7 months, respectively (hazard ratio [HR] 0.66, P = 0.003). The benefit of HIPEC was consistent across stratification factors and post hoc subgroups. Hazard ratios (none reaching statistical significance) were 0.63 and 0.72 for those aged ≥ 65 and < 65 years; 0.69 and 0.56 for those with high-grade serous and other histology; 0.71 and 0.47 for those with no previous surgery and previous surgery; 0.64 and 0.66 for those with 0 to 5 and 6 to 8 involved regions; and 0.69 and 0.61 for those with no laparoscopy vs laparoscopy before surgery. Death occurred in 50% of the hyperthermic intraperitoneal chemotherapy group vs 62% of the control group; median overall survival was 45.7 vs 33.9 months (HR 0.67, P = 0.02).
No significant differences between the HIPEC and control groups were observed in the incidence of adverse events of any grade. The most common adverse events of any grade in the HIPEC group were nausea (63% vs 57%), abdominal pain (60% vs 57%), and fatigue (37% vs 30%). Grade ≥ 3 adverse events occurred in 27% vs 25% of patients (P = 0.76). The most common grade 3 or 4 adverse events in the HIPEC group were infection (6% vs 2%), abdominal pain (5% vs 6%), and ileus (4% vs 2%). Among the patients who underwent bowel resection, a colostomy or ileostomy was performed more often in the surgery-plus-HIPEC group (21 of 29 patients [72%]) than in the surgery group (13 of 30 patients [43%]) (P = 0.04).
Conclusion. Among patients with stage III epithelial ovarian cancer, the addition of hyperthermic intraperitoneal chemotherapy to interval cytoreductive surgery resulted in longer recurrence-free survival and overall survival than surgery alone and did not result in higher rates of side effects.
Commentary
Ovarian cancer is associated with the highest mortality of all gynecologic cancers in the Western world [1]. The majority of patients have advanced disease at diagnosis, and the most effective treatment for advanced disease is maximal debulking surgery followed by chemotherapy. For patients in whom primary surgery is not feasible, primary chemotherapy is given, followed by interval debulking after 3 courses of chemotherapy [2]. However, outcomes remain dismal for patients with advanced disease. Regional (intraperitoneal) chemotherapy theoretically results in a lower rate of systemic toxic effects and may improve outcomes by eliminating residual microscopic disease more effectively than intravenous chemotherapy [3].
Intraperitoneal chemotherapy delivered during surgery under hyperthermic conditions is termed hyperthermic intraperitoneal chemotherapy. The rationale for using hyperthermic conditions when delivering intraperitoneal chemotherapy is multifactorial. Clinical hyperthermia is defined as the use of temperatures of 41°C and higher. Hyperthermia itself has direct cytotoxic effects on cells, mediated by impaired DNA repair, denaturation of proteins, induction of heat-shock proteins (which may serve as receptors for natural killer cells), induction of apoptosis, and inhibition of angiogenesis. In addition to its intrinsic cytotoxic effect, hyperthermia acts in synergy with some chemotherapeutic agents and increases peritoneal and tumor drug penetration [4].
The study by van Driel et al evaluated the impact of adding HIPEC to interval cytoreductive surgery in patients who received neoadjuvant chemotherapy for stage III epithelial ovarian cancer. The authors found that the addition of HIPEC resulted in an 11.8-month improvement in median overall survival compared with surgery alone, without an increased rate of side effects.
The outcomes of the trial by van Driel et al are encouraging, but questions remain about how to apply these results in everyday clinical practice. First, given the extensive reported experience with HIPEC in select single-center and multicenter trials, it is reasonable to conclude that the procedure can be undertaken successfully by well-trained surgical/gynecologic oncologists at institutions experienced in the approach. However, clinical trials have limited external validity, and while they provide evidence of efficacy (ie, the effect of the intervention under highly selected conditions), they generally do not provide evidence of effectiveness (ie, the benefit to the general population of patients with the disease). Can the same results be reproduced in hospitals across the country? Second, what part of HIPEC was responsible for the benefit? Was it merely the administration of chemotherapy through the intraperitoneal route? Is hyperthermia necessary to produce the benefit observed in this trial? The answers to these questions are not known. Third, the cost-benefit ratio warrants serious consideration as well. As the authors pointed out, the addition of HIPEC extended the duration of surgery by 2 hours and required a perfusionist. Additional standard costs are incurred for the HIPEC machine, the disposable products needed to administer HIPEC, and a 1-day stay in the ICU. The increased use of diverting colostomy and ileostomy will also increase the overall cost of treatment.
Applications for Clinical Practice
This trial is an important step in establishing the efficacy of adding HIPEC to interval cytoreductive surgery without increasing the side effects. However, whether the same results can be reproduced at centers at which surgeons do not have as much expertise in administering HIPEC remains to be seen. New confirmatory clinical trials of HIPEC are needed before it can be recommended as a common treatment strategy.
—Deval Rajyaguru, MD, Gundersen Health System, La Crosse, WI
1. Levi F, Lucchini F, Negri E, La Vecchia C. Trends in mortality from major cancers in the European Union, including acceding countries, in 2004. Cancer 2006;101:2843–50.
2. van der Burg MEL, van Lent M, Buyse M, et al. The effect of debulking surgery after induction chemotherapy on the prognosis in advanced epithelial ovarian cancer. N Engl J Med 1995;332:629–34.
3. Armstrong DK, Bundy B, Wenzel L, et al. Intraperitoneal cisplatin and paclitaxel in ovarian cancer. N Engl J Med 2006;354:34–43.
4. Ohno S, Siddik ZH, Kido Y, et al. Thermal enhancement of drug uptake and DNA adducts as a possible mechanism for the effect of sequencing hyperthermia on cisplatin-induced cytotoxicity in L1210 cells. Cancer Chemother Pharmacol 1994;34:302–6.
Effect of Romosozumab vs. Alendronate on Osteoporosis Fracture Risk
Study Overview
Objective. To determine whether romosozumab, an antisclerostin antibody, is superior to alendronate in reducing the incidence of fracture in postmenopausal women with osteoporosis at high risk for fracture.
Design. Multicenter, international, double-blind, randomized clinical trial.
Setting and participants. 4093 postmenopausal women with osteoporosis and a previous fragility fracture were enrolled from over 40 countries worldwide. Patients were eligible for the study if they were 55 to 90 years old and were deemed at high risk for future fracture based on bone mineral density (BMD) T score at the total hip or femoral neck and fracture history: either a T score ≤ –2.5 and either ≥ 1 moderate or severe vertebral fracture or ≥ 2 mild vertebral fractures, or a T score ≤ –2.0 and either ≥ 2 moderate or severe vertebral fractures or a proximal femur fracture within 3 to 24 months before randomization. Subjects with prior use of medications that affect bone metabolism were excluded, as were those with other metabolic bone disease, vitamin D deficiency, uncontrolled metabolic disease, malabsorption syndromes, history of transplant, severe renal insufficiency, malignancy, or severe illness.
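The BMD and fracture-history thresholds above combine into a simple eligibility rule, restated in the sketch below. The function name and inputs are hypothetical, and the sketch covers only the BMD/fracture criteria, not the age window or exclusion criteria.

```python
# Hypothetical restatement of the BMD/fracture eligibility logic described above;
# it does not model the age window or the exclusion criteria.
def meets_bmd_fracture_criteria(
    t_score: float,                      # total hip or femoral neck T score
    moderate_or_severe_vertebral_fx: int,
    mild_vertebral_fx: int,
    recent_proximal_femur_fx: bool,      # fracture 3-24 months before randomization
) -> bool:
    if t_score <= -2.5 and (moderate_or_severe_vertebral_fx >= 1 or mild_vertebral_fx >= 2):
        return True
    if t_score <= -2.0 and (moderate_or_severe_vertebral_fx >= 2 or recent_proximal_femur_fx):
        return True
    return False

# Example: a T score of -2.2 with a recent proximal femur fracture qualifies
print(meets_bmd_fracture_criteria(-2.2, 0, 0, True))  # True
```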
Intervention. Patients were randomized to either subcutaneous romosozumab 210 mg monthly or oral alendronate 70 mg weekly for 12 months. Following the 12-month double-blind period, all patients received open-label weekly alendronate until the end of the trial, with blinding to the initial treatment assignment maintained. The primary analysis occurred when all subjects had completed the 24-month visit and clinical fractures had been confirmed in at least 330 patients. All patients received daily calcium and vitamin D. Lateral radiographs of the thoracic and lumbar spine were obtained at screening and at months 12 and 24. BMD at the lumbar spine and proximal femur was evaluated by dual-energy x-ray absorptiometry at baseline and every 12 months thereafter. Serum concentrations of bone-turnover markers were measured in a subgroup of patients.
Main outcome measures. The primary outcomes were the incidence of new vertebral fracture and the incidence of clinical fracture at 24 months. Clinical fractures included symptomatic vertebral fracture and nonvertebral fractures. The secondary outcomes were the BMD at the lumbar spine, total hip, and femoral neck at 12 and 24 months, the incidence of nonvertebral fracture, and fracture category. Safety outcomes included the incidence of adjudicated clinical events, including serious cardiovascular adverse events, osteonecrosis of the jaw, and atypical femoral fracture. Serious cardiovascular events were defined as cardiac ischemic event, cerebrovascular event, heart failure, death, non-coronary revascularization and peripheral vascular ischemic event not requiring revascularization.
Analysis. An intention-to-treat approach was used for data analysis. For the incidence of fractures, the treatment groups were compared using a Cox proportional-hazards model and the Mantel-Haenszel method with adjustment for age (< 75 vs ≥ 75 years), the presence or absence of severe vertebral fracture at baseline, and baseline BMD T score at the total hip. Between-group comparisons of the percentage change in BMD from baseline were analyzed by means of a repeated-measures model with adjustment for treatment, age category, baseline severe vertebral fracture, visit, treatment-by-visit interaction, and baseline BMD. Percentage changes from baseline in bone-turnover markers were assessed using a Wilcoxon rank-sum test. The safety analysis included cumulative incidence rates of adverse outcomes. Odds ratios and confidence intervals were estimated for serious cardiovascular adverse events with the use of a logistic regression model.
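For readers who want to see how analyses of this general shape are set up, the sketch below fits a Cox proportional-hazards model for time to fracture and a logistic regression for a binary safety outcome on simulated data. It is a minimal illustration only: the data, column names, and effect sizes are invented, and it does not reproduce the trial's actual stratified Mantel-Haenszel or repeated-measures analyses.

```python
# Minimal, illustrative sketch only -- simulated data and invented column names,
# not the ARCH trial's analysis code or data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment":      rng.integers(0, 2, n),      # 1 = romosozumab-to-alendronate, 0 = alendronate-alendronate
    "age_ge_75":      rng.integers(0, 2, n),      # adjustment covariate: age >= 75 years
    "severe_vert_fx": rng.integers(0, 2, n),      # baseline severe vertebral fracture (yes/no)
    "hip_tscore":     rng.normal(-2.8, 0.5, n),   # baseline total-hip BMD T score
    "months_to_fx":   rng.exponential(30, n),     # time to clinical fracture or censoring (months)
    "fx_event":       rng.integers(0, 2, n),      # 1 = fracture observed, 0 = censored
    "cv_event":       rng.integers(0, 2, n),      # serious cardiovascular adverse event (yes/no)
})

# Cox proportional-hazards model for time to clinical fracture,
# adjusted for covariates like those named in the Analysis paragraph.
cox_cols = ["months_to_fx", "fx_event", "treatment", "age_ge_75", "severe_vert_fx", "hip_tscore"]
cph = CoxPHFitter()
cph.fit(df[cox_cols], duration_col="months_to_fx", event_col="fx_event")
cph.print_summary()  # hazard ratios are exp(coef)

# Logistic regression for the serious cardiovascular safety outcome.
X = sm.add_constant(df[["treatment", "age_ge_75", "severe_vert_fx", "hip_tscore"]])
fit = sm.Logit(df["cv_event"], X).fit(disp=False)
print("odds ratio for treatment:", float(np.exp(fit.params["treatment"])))
```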
Main results. 2046 participants were randomized to the romosozumab group and 2047 to the alendronate group. A total of 3654 participants from both groups (89.3%) completed 12 months of the trial, and 3150 (77.0%) completed the primary analysis period. The treatment groups were similar in baseline age, ethnicity, and fracture history. The majority of patients in both groups were non-Hispanic (> 60%) and ≥ 75 years old (> 50%). The mean age of the patients was 74.3 years. Baseline mean bone mineral density T scores were –2.96 at the lumbar spine, –2.8 at the total hip, and –2.9 at the femoral neck.
After 24 months of treatment, 6.2% of patients in the romosozumab-alendronate group had a new vertebral fracture, as compared with 11.9% in the alendronate-alendronate group, representing a 48% lower risk of new vertebral fractures with romosozumab (risk ratio 0.52, 95% confidence interval [CI] 0.40–0.66; P < 0.001). At the time of the primary analysis, romosozumab followed by alendronate resulted in a 27% lower risk of clinical fracture than alendronate alone (hazard ratio 0.73, 95% CI 0.61–0.88; P < 0.001). A nonvertebral fracture occurred in 8.7% of the romosozumab-alendronate group versus 10.6% of the alendronate-alendronate group, representing a 19% lower risk with romosozumab (hazard ratio 0.81, 95% CI 0.66–0.99; P = 0.04). Hip fractures occurred in 2.0% of the romosozumab-alendronate group as compared with 3.2% of the alendronate-alendronate group, representing a 38% lower risk with romosozumab (hazard ratio 0.62, 95% CI 0.42–0.92; P = 0.02).
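As a quick arithmetic check, the "48% lower risk" headline follows directly from the two cumulative incidences quoted above; the unadjusted ratio happens to match the reported (adjusted) risk ratio:

```python
# Relative risk of new vertebral fracture from the reported cumulative incidences.
romo_risk = 0.062   # romosozumab-to-alendronate group
alen_risk = 0.119   # alendronate-to-alendronate group

risk_ratio = romo_risk / alen_risk            # ~0.52
relative_risk_reduction = 1 - risk_ratio      # ~0.48, the "48% lower risk"
print(f"risk ratio ~ {risk_ratio:.2f}, relative risk reduction ~ {relative_risk_reduction:.0%}")
```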
Patients in the romosozumab-alendronate group had greater gains in BMD from baseline at the lumbar spine (14.9% vs 8.5%) and total hip (7.0% vs 3.6%) than those in the alendronate-alendronate group (P < 0.001 for all comparisons). At 12 months, romosozumab treatment had decreased levels of the bone resorption marker β-CTX and increased levels of the bone formation marker P1NP. After the transition to alendronate, both β-CTX and P1NP decreased and remained below baseline levels. In the alendronate-alendronate group, P1NP and β-CTX decreased within 1 month and remained below baseline levels through 36 months.
Overall, rates of adverse events and serious adverse events were similar between the 2 treatment groups during the double-blind period, with 2 exceptions. In the first 12 months, injection-site reactions were reported in 4.4% of patients receiving romosozumab compared with 2.6% of those receiving alendronate. Patients in the romosozumab group also had a higher incidence of adjudicated serious cardiovascular outcomes during the double-blind period: 2.5% (50 of 2040 patients) compared with 1.9% (38 of 2014 patients) in the alendronate group. During the open-label period, osteonecrosis of the jaw occurred in one patient in each group. Two atypical femoral fractures occurred in the romosozumab-alendronate group, compared with 4 in the alendronate-alendronate group. During the first 18 months of the study, binding anti-romosozumab antibodies were observed in 15.3% of the romosozumab group, with neutralizing antibodies in 0.6%.
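To put the cardiovascular signal in perspective, a crude (unadjusted) odds ratio can be computed from the counts reported above. This is illustrative only; the trial's prespecified analysis estimated odds ratios with an adjusted logistic regression model.

```python
# Crude odds ratio for adjudicated serious cardiovascular events during the
# double-blind period, from the reported counts (illustrative only; the trial
# used an adjusted logistic regression model).
romo_events, romo_n = 50, 2040   # romosozumab group
alen_events, alen_n = 38, 2014   # alendronate group

odds_romo = romo_events / (romo_n - romo_events)
odds_alen = alen_events / (alen_n - alen_events)
print(f"crude odds ratio ~ {odds_romo / odds_alen:.2f}")   # ~1.31
```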
Conclusion. In postmenopausal women with osteoporosis and high fracture risk, 12 months of romosozumab treatment followed by alendronate resulted in a significantly lower risk of fracture than alendronate alone.
Commentary
Osteoporosis-related fragility fractures carry a substantial risk of morbidity and mortality [1]. The goal of osteoporosis treatment is to ameliorate this risk. The current FDA-approved medications for osteoporosis can be divided into anabolic (teriparatide, abaloparatide) and anti-resorptive (bisphosphonates, denosumab, selective estrogen receptor modulators) categories. Sclerostin is a glycoprotein produced by osteocytes that inhibits the Wnt signaling pathway, thereby impeding osteoblast proliferation and activity. Romosozumab is a monoclonal antisclerostin antibody that results in both increased bone formation and decreased bone resorption [1]. By apparently uncoupling bone formation from resorption to increase bone mass, this medication holds promise to become the ideal osteoporosis drug.
Initial studies showed that 12 months of romosozumab treatment significantly increased BMD at the lumbar spine (+11.3%), as compared with placebo (–0.1%), alendronate (+4.1%), and teriparatide (+7.1%) [2]. The Fracture Study in Postmenopausal Women with Osteoporosis (FRAME) was a large (7180 patients) randomized controlled trial that demonstrated that 12 months of romosozumab resulted in a 73% lower risk of vertebral fracture and a 36% lower risk of clinical fracture compared with placebo [3]. However, there was no significant reduction in nonvertebral fracture [3]. This may be because FRAME excluded women at the highest risk for fracture: its exclusion criteria included a history of hip fracture, any severe vertebral fracture, or more than 2 moderate vertebral fractures. The current phase 3 ARCH trial (Active-Controlled Fracture Study in Postmenopausal Women with Osteoporosis at High Risk) attempts to clarify the potential benefit of romosozumab treatment in this very high-risk patient population compared with a common first-line osteoporosis treatment, alendronate.
Indeed, ARCH demonstrates that sequential therapy with romosozumab followed by alendronate is superior to alendronate alone in improving BMD at all sites and preventing new vertebral, clinical, and nonvertebral fractures in postmenopausal women with osteoporosis and a history of fragility fracture. While ARCH was not designed as a cardiovascular outcomes trial, the higher rate of serious cardiovascular adverse events in the romosozumab group raises concern that romosozumab may have a negative effect on vascular tissue. Sclerostin is expressed in vascular smooth muscle [4] and upregulated at sites of vascular calcification [5], so it is possible that inhibiting sclerostin activity could alter vascular remodeling or increase vascular calcification. Interestingly, in the larger FRAME trial no increase in adverse cardiovascular events was seen in the romosozumab group compared with placebo. This may be because the average age of patients in FRAME was lower than in ARCH, but it also raises the hypothesis that alendronate itself may be protective with respect to cardiovascular risk. It has been postulated that bisphosphonates may have cardiovascular protective effects, given that animal studies have demonstrated that alendronate downregulates monocyte chemoattractant protein 1 and macrophage inflammatory protein 1α [6]. However, no cardioprotective benefit was seen in a meta-analysis [7].
ARCH has several strengths, including its design as an international, double-blind, randomized clinical trial. The primary outcome of cumulative fracture incidence is a hard, clinically relevant endpoint. The intervention is simple, and the results are clearly defined and statistically significant. However, there are some limitations. The lead author has received research support from Amgen and UCB Pharma, the makers of romosozumab. Amgen and UCB Pharma designed the trial, and Amgen was responsible for trial oversight and data analyses per a prespecified statistical analysis plan; an external independent data monitoring committee monitored unblinded safety data. Because there was no placebo arm, it is difficult to determine whether the unexpected cardiovascular signal was due to romosozumab itself or to a protective effect of alendronate. In addition, the majority of study participants were non-Hispanic women from Central or Eastern Europe and Latin America, with only ~2% of patients from North America; as a result, the ARCH findings may not be generalizable to other regional or ethnic populations. Furthermore, the majority of the patients were ≥ 75 years of age and at very high fracture risk, and it is unclear whether younger patients or those at lower risk of fracture would see the same fracture prevention and BMD gain. Finally, because of the relatively short length of the trial, the durability of the metabolic bone benefit and of the cardiovascular risk is unknown. While the authors reported that the anti-romosozumab antibodies observed in the romosozumab group had no detectable effect on efficacy or safety, given the short duration of the trial this has not been proven.
Applications for Clinical Practice
The dual anti-resorptive and anabolic effect of romosozumab makes it an attractive and promising new osteoporosis therapy. ARCH suggests that sequential therapy with romosozumab followed by alendronate is superior to alendronate alone for fracture prevention in elderly postmenopausal women with osteoporosis and a history of fragility fracture, although longer-term studies are needed to define the durability of this effect. While the absolute number of adjudicated serious cardiovascular events was low, the increased incidence in the romosozumab group will likely prevent the FDA from approving this medication for widespread use at this time. Additional studies are needed to clarify the cause and magnitude of this cardiovascular risk and to determine whether prevention of fracture-associated morbidity and mortality is enough to mitigate it.
—Simona Frunza-Stefan, MD, and Hillary B. Whitlach, MD, University of Maryland School of Medicine, Baltimore, MD
1. Cummings SR, Melton LJ 3rd. Epidemiology and outcomes of osteoporotic fractures. Lancet 2002;359:1761–7.
2. McClung MR, Grauer A, Boonen S, et al. Romosozumab in postmenopausal women with low bone mineral density. N Engl J Med 2014;370:412–20.
3. Cosman F, Crittenden DB, Adachi JD, et al. Romosozumab treatment in postmenopausal women with osteoporosis. N Engl J Med 2016;375:1532–43.
4. Zhu D, Mackenzie NCW, Millán JL, et al. The appearance and modulation of osteocyte marker expression during calcification of vascular smooth muscle cells. PLoS One 2011;6:e19595.
5. Evenepoel P, Goffin E, Meijers B, et al. Sclerostin serum levels and vascular calcification progression in prevalent renal transplant recipients. J Clin Endocrinol Metab 2015;100:4669–76.
6. Masuda T, Deng X, Tamai R. Mouse macrophages primed with alendronate down-regulate monocyte chemoattractant protein-1 (MCP-1) and macrophage inflammatory protein-1alpha (MIP-1alpha) production in response to Toll-like receptor (TLR) 2 and TLR4 agonist via Smad3 activation. Int Immunopharmacol 2009;9:1115–21.
7. Kim DH, Rogers JR, Fulchino LA, et al. Bisphosphonates and risk of cardiovascular events: a meta-analysis. PLoS One 2015;10:e0122646.
Which Herpes Zoster Vaccine is Most Cost-Effective?
Study Overview
Objective. To assess the cost-effectiveness of the new adjuvanted herpes zoster subunit vaccine (HZ/su) as compared with that of the current live attenuated herpes zoster vaccine (ZVL), or no vaccine.
Design. Markov decision model evaluating 3 strategies from a societal perspective: (1) no vaccination, (2) vaccination with a single dose of ZVL, and (3) vaccination with a 2-dose series of HZ/su.
Setting and participants. Data for the model were extracted from the US medical literature using PubMed through January 2015. Data were derived from studies of fewer than 100 patients to more than 30,000 patients, depending on the variable assessed. Variables included epidemiologic parameters, vaccine efficacy and adverse events, quality-adjusted life-years (QALYs), and costs. Because there is no standard willingness-to-pay (WTP) threshold for cost-effectiveness in the United States, $50,000 per QALY was chosen.
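To make the modeling approach concrete, the sketch below implements a toy Markov cohort model that tracks a cohort through a handful of health states, accumulates discounted costs and QALYs each cycle, and computes an incremental cost-effectiveness ratio between two strategies. Every input (states, transition probabilities, costs, utilities, and the assumed vaccine effect) is an invented placeholder; this is not the published model or its parameters.

```python
# Toy Markov cohort sketch of a zoster vaccination decision model.
# All transition probabilities, costs, and utilities are hypothetical placeholders.
import numpy as np

STATES = ["healthy", "herpes_zoster", "postherpetic_neuralgia", "dead"]
CYCLES = 30        # yearly cycles
DISCOUNT = 0.03    # 3% annual discount rate

def run_strategy(p_hz, vaccine_cost):
    """Return discounted total cost and QALYs per person for one strategy."""
    # Rows = current state, columns = next state (hypothetical annual probabilities).
    P = np.array([
        [1 - p_hz - 0.02, p_hz, 0.00, 0.02],   # healthy
        [0.85,            0.00, 0.13, 0.02],   # zoster resolves or becomes PHN
        [0.50,            0.00, 0.48, 0.02],   # PHN persists or resolves
        [0.00,            0.00, 0.00, 1.00],   # dead (absorbing)
    ])
    state_cost = np.array([0.0, 500.0, 2000.0, 0.0])   # cost per cycle in each state
    state_utility = np.array([0.85, 0.60, 0.50, 0.0])  # QALY weight per cycle in each state

    cohort = np.array([1.0, 0.0, 0.0, 0.0])            # everyone starts healthy
    cost, qalys = vaccine_cost, 0.0
    for t in range(CYCLES):
        disc = 1.0 / (1.0 + DISCOUNT) ** t
        cost += disc * cohort @ state_cost
        qalys += disc * cohort @ state_utility
        cohort = cohort @ P                            # advance the cohort one cycle
    return cost, qalys

# Hypothetical strategies: no vaccination vs. a vaccine that lowers annual zoster risk.
c0, q0 = run_strategy(p_hz=0.010, vaccine_cost=0.0)
c1, q1 = run_strategy(p_hz=0.004, vaccine_cost=280.0)
icer = (c1 - c0) / (q1 - q0)
print(f"ICER ~ ${icer:,.0f} per QALY gained (hypothetical inputs)")
```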
Main outcome measures. Total costs and QALYs.
Main results. At all ages, no vaccination was always the least expensive and least effective option, while HZ/su was always the most effective strategy and was less expensive than ZVL. At the proposed price of $280 per series ($140 per dose), the incremental cost-effectiveness ratio of HZ/su compared with no vaccination ranged from $20,038 to $30,084 per QALY, depending on vaccination age. The cost-effectiveness of HZ/su was insensitive to the waning rate of either vaccine because of its high efficacy, with an initial level of protection close to 90% even among people 70 years or older.
Conclusion. At the manufacturer's suggested price of $280 per series ($140 per dose), HZ/su would cost less than ZVL and would have a high probability of offering good value.
Commentary
Herpes zoster is a localized, usually painful, cutaneous eruption resulting from reactivation of latent varicella zoster virus. It is a common disease, with approximately one million cases occurring each year in the United States [1]. The incidence increases with age, from 5 cases per 1000 population in adults aged 50–59 years to 11 cases per 1000 population in persons aged ≥ 80 years. Postherpetic neuralgia, commonly defined as pain persisting for at least 90 days after resolution of the herpes zoster rash, is the most common complication and occurs in 10% to 13% of herpes zoster cases in persons aged > 50 years [2,3].
In 2006, the US Food and Drug Administration (FDA) approved the ZVL vaccine Zostavax (Merck) for prevention of herpes zoster. By 2016, 33% of adults aged ≥ 60 years reported receipt of the vaccine [4]. However, ZVL does not prevent all herpes zoster, particularly among the elderly, and its efficacy wanes completely after approximately 10 years [5]. To address these shortcomings, a 2-dose HZ/su vaccine (Shingrix; GlaxoSmithKline), containing recombinant glycoprotein E in combination with a novel adjuvant (AS01B), was approved by the FDA for adults aged ≥ 50 years. In randomized controlled trials, HZ/su had an efficacy of close to 97%, even in adults aged 70 years or older [6].
With the approval of this new adjuvanted subunit herpes zoster vaccine, clinicians and patients face the question of which vaccine to get and when. The cost-effectiveness analysis published by Le and Rothberg compares the value of HZ/su with that of ZVL and of a no-vaccination strategy for individuals 60 years or older from the US societal perspective. The results suggest that, at $140 per dose, HZ/su vaccination compared with no vaccination would cost between $20,038 and $30,084 per QALY and is therefore a cost-effective strategy. Deterministic sensitivity analysis indicates that the overall results do not change under different assumptions about model input parameters, even if patients are nonadherent to the second dose of HZ/su.
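The cost-effectiveness claim rests on the standard incremental cost-effectiveness ratio; the expression below uses generic symbols rather than the authors' notation:

\[
\text{ICER} \;=\; \frac{C_{\text{HZ/su}} - C_{\text{no vaccination}}}{\text{QALY}_{\text{HZ/su}} - \text{QALY}_{\text{no vaccination}}}
\]

A strategy is judged cost-effective when its ICER falls below the willingness-to-pay threshold, here $50,000 per QALY; the reported ICERs of $20,038 to $30,084 per QALY all lie below that threshold.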
As with any simulation study, the major limitation of this study is the accuracy of the model and the assumptions on which it is based. The body of evidence for the benefits of ZVL is large, including multiple pre-licensure and post-licensure randomized controlled trials as well as observational studies of effectiveness, whereas the evidence for the benefits of RZV comes primarily from one high-quality randomized controlled trial that followed vaccine efficacy through 4 years post-vaccination [4,6]. Three other cost-effectiveness analyses are currently available. The Centers for Disease Control and Prevention model estimated an HZ/su cost per QALY of $31,000 when vaccination occurred at age ≥ 50 years; the model from GlaxoSmithKline, the manufacturer of HZ/su, estimated $12,000 per QALY; and the model from Merck, the manufacturer of ZVL, estimated $107,000 per QALY [4]. In addition to the model variables, the key assumptions of Le and Rothberg are an HZ/su cost of $140 per dose and a ZVL cost of $213. The results need to be interpreted carefully if vaccine prices turn out to be different in the future.
Applications for Clinical Practice
The current study by Le and Rothberg demonstrated the cost-effectiveness of the new HZ/su vaccine. Since the study's publication, the CDC has updated its recommendations on immunization practices for use of herpes zoster vaccines [4]. The HZ/su vaccine, also known as the recombinant zoster vaccine (RZV), is now preferred over ZVL for the prevention of herpes zoster and related complications. RZV is recommended for immunocompetent adults aged 50 years or older, 10 years earlier than the previous recommendation for ZVL. In addition, RZV is recommended for adults who previously received ZVL. Finally, RZV can be administered concomitantly with other adult vaccines, does not require screening for a history of varicella, and is likely safe for immunocompromised persons.
—Ka Ming Gordon Ngai, MD, MPH
1. Insinga RP, Itzler RF, Pellissier JM, et al. The incidence of herpes zoster in a United States administrative database. J Gen Intern Med 2005;20:748–53.
2. Yawn BP, Saddier P, Wollan PC, et al. A population-based study of the incidence and complication rates of herpes zoster before zoster vaccine introduction. Mayo Clin Proc 2007;82:1341–9.
3. Oxman MN, Levin MJ, Johnson GR, et al; Shingles Prevention Study Group. A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults. N Engl J Med 2005;352:2271–84.
4. Dooling KL, Guo A, Patel M, et al. Recommendations of the Advisory Committee on Immunization Practices for use of herpes zoster vaccines. MMWR Morb Mortal Wkly Rep 2018;67:103–8.
5. Morrison VA, Johnson GR, Schmader KE, et al; Shingles Prevention Study Group. Long-term persistence of zoster vaccine efficacy. Clin Infect Dis 2015;60:900–9.
6. Lal H, Cunningham AL, Godeaux O, et al; ZOE-50 Study Group. Efficacy of an adjuvanted herpes zoster subunit vaccine in older adults. N Engl J Med 2015;372:2087–96.
Study Overview
Objective. To assess the cost-effectiveness of the new adjuvanted herpes zoster subunit vaccine (HZ/su) as compared with that of the current live attenuated herpes zoster vaccine (ZVL), or no vaccine.
Design. Markov decision model evaluating 3 strategies from a societal perspective: (1) no vaccination, (2) vaccination with single dose ZVL, and (3) vaccination with 2-dose series of HZ/su.
Setting and participants. Data for the model were extracted from the US medical literature using PubMed through January 2015. Data were derived from studies of fewer than 100 patients to more than 30,000 patients, depending on the variable assessed. Variables included epidemiologic parameters, vaccine efficacy and adverse events, quality-adjusted life-years (QALYs), and costs. Because there is no standard willingness-to-pay (WTP) threshold for cost-effectiveness in the United States, $50,000 per QALY was chosen.
Main outcome measures. Total costs and QALYs.
Main results. At all ages, no vaccination was always the least expensive and least effective option, while HZ/su was always the most effective and less expensive than ZVL. At a proposed price of $280 per series ($140 per dose), HZ/su was more effective and less expensive than ZVL at all ages. The incremental cost-effectiveness ratios compared with no vaccination ranged from $20,038 to $30,084 per QALY, depending on vaccination age. The cost-effectiveness of HZ/su was insensitive to the waning rate of either vaccine due to its high efficacy, with initial level of protection close to 90% even among people 70 years or older.
Conclusion. At a manufacturer suggested price of $280 per series ($140 per dose), HZ/su would cost less than ZVL and has a high probability of offering good value.
Commentary
Herpes zosters is a localized, usually painful, cutaneous eruption resulting from reactivation of latent varicella zoster virus. It is a common disease with approximately one million cases occurring each year in the United States [1]. The incidence increases with age, from 5 cases per 1000 population in adults aged 50–59 years to 11 cases per 1000 population in persons aged ≥ 80 years. Postherpetic neuralgia, commonly defined as persistent pain for at least 90 days following the resolution of the herpes zoster rash, is the most common complication and occurs in 10% to 13% of herpes zoster cases in persons aged > 50 years [2,3].
In 2006, the US Food and Drug Administration (FDA) approved the ZVL vaccine Zostavax (Merck) for prevention of postherpetic neuralgia. By 2016, 33% of adults aged ≥ 60 years reported receipt of the vaccine [4]. However, ZVL does not prevent all herpes zoster, particularly among the elderly. Moreover, the efficacy wanes completely after approximately 10 years [5]. To address these shortcomings, a 2-dose HZ/su (Shingrix; GlaxoSmithKline) containing recombinant glycoprotein E in combination with a novel adjuvant (AS01B) was approved by the FDA in adults aged ≥ 50 years. In randomized controlled trials, HZ/su has an efficacy of close to 97%, even after age 70 years [6].
With the approval of the new attenuated herpes zoster vaccine, clinicians and patients face the question of which vaccine to get and when. The cost-effectiveness analysis published by Le and Rothberg in this study compare the value of HZ/su with ZVL vaccine and a no-vaccine strategy for individuals 60 years or older from the US societal perspective. The results suggest that, at $140 per dose, using HZ/su vaccine compared with no vaccine would cost between $20,038 and $30,084 per QALY and thus is a cost-effective strategy. The deterministic sensitivity analysis indicates that the overall results do not change under different assumptions about model input parameters, even if patients are nonadherent to the second dose of HZ/su vaccine.
As with any simulation study, the major limitation of this study is the accuracy of the model and the assumptions on which it is based. The body of evidence for benefits of ZVL was large, including multiple pre-licensure and post-licensure RCTs, as well as observational studies of effectiveness. On the other hand, the body of evidence for benefits of RZV was primarily informed by one high-quality RCT that studied vaccine efficacy through 4 years post-vaccination [4,6]. Currently, 3 other independent cost-effectiveness analysis are available. The Centers for Disease Control and Prevention model estimated HZ/su vaccine cost per QALY of $31,000 when vaccination occurred at age ≥ 50 years. The GlaxoSmithKline model, manufacturer of HZ/su vaccine, estimated a HZ/su vaccine cost per QALY of $12,000. While the Merck model, manufacturer of the ZVL vaccine, estimated a HZ/su vaccine cost per QALY of $107,000 [4]. In addition to model variables, the key assumption by Le and Rothberg are based on the HZ/su vaccine cost at $140 per dose and ZVL at $213. The study results need to be interpreted carefully if the vaccine prices turn out to be different in the future.
Applications for Clinical Practice
The current study by Le and Rothberg demonstrated the cost-effectiveness of the new HZ/su vaccine. Since the study’s publication, the CDC has updated their recommendations on immunization practices for use of herpes zoster vaccine [4]. HZ/su vaccine, also known as the recombinant zoster vaccine (RZV), is now preferred over ZVL for the prevention of herpes zoster and related complications. RZV is recommended for immunocompetent adults age 50 or older, 10 years earlier than previously for the ZVL. In addition, RZV is recommended for adults who previously received ZVL. Finally, RZV can be administered concomitantly with other adult vaccines, does not require screening for a history of varicella, and is likely safe for immunocompromised persons.
—Ka Ming Gordon Ngai, MD, MPH
Study Overview
Objective. To assess the cost-effectiveness of the new adjuvanted herpes zoster subunit vaccine (HZ/su) as compared with that of the current live attenuated herpes zoster vaccine (ZVL), or no vaccine.
Design. Markov decision model evaluating 3 strategies from a societal perspective: (1) no vaccination, (2) vaccination with single dose ZVL, and (3) vaccination with 2-dose series of HZ/su.
Setting and participants. Data for the model were extracted from the US medical literature using PubMed through January 2015. Data were derived from studies of fewer than 100 patients to more than 30,000 patients, depending on the variable assessed. Variables included epidemiologic parameters, vaccine efficacy and adverse events, quality-adjusted life-years (QALYs), and costs. Because there is no standard willingness-to-pay (WTP) threshold for cost-effectiveness in the United States, $50,000 per QALY was chosen.
Main outcome measures. Total costs and QALYs.
Main results. At all ages, no vaccination was always the least expensive and least effective option, while HZ/su was always the most effective and less expensive than ZVL. At a proposed price of $280 per series ($140 per dose), HZ/su was more effective and less expensive than ZVL at all ages. The incremental cost-effectiveness ratios compared with no vaccination ranged from $20,038 to $30,084 per QALY, depending on vaccination age. The cost-effectiveness of HZ/su was insensitive to the waning rate of either vaccine due to its high efficacy, with initial level of protection close to 90% even among people 70 years or older.
Conclusion. At a manufacturer suggested price of $280 per series ($140 per dose), HZ/su would cost less than ZVL and has a high probability of offering good value.
Commentary
Herpes zosters is a localized, usually painful, cutaneous eruption resulting from reactivation of latent varicella zoster virus. It is a common disease with approximately one million cases occurring each year in the United States [1]. The incidence increases with age, from 5 cases per 1000 population in adults aged 50–59 years to 11 cases per 1000 population in persons aged ≥ 80 years. Postherpetic neuralgia, commonly defined as persistent pain for at least 90 days following the resolution of the herpes zoster rash, is the most common complication and occurs in 10% to 13% of herpes zoster cases in persons aged > 50 years [2,3].
In 2006, the US Food and Drug Administration (FDA) approved the ZVL vaccine Zostavax (Merck) for prevention of postherpetic neuralgia. By 2016, 33% of adults aged ≥ 60 years reported receipt of the vaccine [4]. However, ZVL does not prevent all herpes zoster, particularly among the elderly. Moreover, the efficacy wanes completely after approximately 10 years [5]. To address these shortcomings, a 2-dose HZ/su (Shingrix; GlaxoSmithKline) containing recombinant glycoprotein E in combination with a novel adjuvant (AS01B) was approved by the FDA in adults aged ≥ 50 years. In randomized controlled trials, HZ/su has an efficacy of close to 97%, even after age 70 years [6].
With the approval of the new attenuated herpes zoster vaccine, clinicians and patients face the question of which vaccine to get and when. The cost-effectiveness analysis published by Le and Rothberg in this study compare the value of HZ/su with ZVL vaccine and a no-vaccine strategy for individuals 60 years or older from the US societal perspective. The results suggest that, at $140 per dose, using HZ/su vaccine compared with no vaccine would cost between $20,038 and $30,084 per QALY and thus is a cost-effective strategy. The deterministic sensitivity analysis indicates that the overall results do not change under different assumptions about model input parameters, even if patients are nonadherent to the second dose of HZ/su vaccine.
As with any simulation study, the major limitation of this study is the accuracy of the model and the assumptions on which it is based. The body of evidence for benefits of ZVL was large, including multiple pre-licensure and post-licensure RCTs, as well as observational studies of effectiveness. On the other hand, the body of evidence for benefits of RZV was primarily informed by one high-quality RCT that studied vaccine efficacy through 4 years post-vaccination [4,6]. Currently, 3 other independent cost-effectiveness analysis are available. The Centers for Disease Control and Prevention model estimated HZ/su vaccine cost per QALY of $31,000 when vaccination occurred at age ≥ 50 years. The GlaxoSmithKline model, manufacturer of HZ/su vaccine, estimated a HZ/su vaccine cost per QALY of $12,000. While the Merck model, manufacturer of the ZVL vaccine, estimated a HZ/su vaccine cost per QALY of $107,000 [4]. In addition to model variables, the key assumption by Le and Rothberg are based on the HZ/su vaccine cost at $140 per dose and ZVL at $213. The study results need to be interpreted carefully if the vaccine prices turn out to be different in the future.
Applications for Clinical Practice
The current study by Le and Rothberg demonstrated the cost-effectiveness of the new HZ/su vaccine. Since the study's publication, the CDC has updated its recommendations on immunization practices for use of herpes zoster vaccines [4]. The HZ/su vaccine, also known as the recombinant zoster vaccine (RZV), is now preferred over ZVL for the prevention of herpes zoster and related complications. RZV is recommended for immunocompetent adults aged 50 years or older, 10 years earlier than the previous recommendation for ZVL. In addition, RZV is recommended for adults who previously received ZVL. Finally, RZV can be administered concomitantly with other adult vaccines, does not require screening for a history of varicella, and is likely safe for immunocompromised persons.
—Ka Ming Gordon Ngai, MD, MPH
1. Insinga RP, Itzler RF, Pellissier JM, et al. The incidence of herpes zoster in a United States administrative database. J Gen Intern Med 2005;20:748–53.
2. Yawn BP, Saddier P, Wollan PC, et al. A population-based study of the incidence and complication rates of herpes zoster before zoster vaccine introduction. Mayo Clin Proc 2007;82:1341–9.
3. Oxman MN, Levin MJ, Johnson GR, et al; Shingles Prevention Study Group. A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults. N Engl J Med 2005;352:2271–84.
4. Dooling KL, Guo A, Patel M, et al. Recommendations of the Advisory Committee on Immunization Practices for use of herpes zoster vaccines. MMWR Morb Mortal Wkly Rep 2018;67:103–8.
5. Morrison VA, Johnson GR, Schmader KE, et al; Shingles Prevention Study Group. Long-term persistence of zoster vaccine efficacy. Clin Infect Dis 2015;60:900–9.
6. Lai H, Cunningham AL, Godeaux O, et al; ZOE-50 Study Group. Efficacy of an adjuvanted herpes zoster subunit vaccine in older adults. N Engl J Med 2015;372:2087–96.
Non-Culprit Lesion PCI Strategies in Patients with Acute Myocardial Infarction and Cardiogenic Shock
Study Overview
Objective. To determine if percutaneous coronary intervention (PCI) of non-culprit vessels should be performed in patients with acute myocardial infarction and cardiogenic shock.
Design. Multicenter randomized controlled trial.
Setting and participants. 706 patients who had multivessel disease, acute myocardial infarction, and cardiogenic shock were assigned to one of 2 revascularization strategies: PCI of the culprit lesion only with the option of staged revascularization of non-culprit lesions, or immediate multivessel PCI.
Main outcome measures. The primary endpoint was the composite of death or severe renal failure leading to renal replacement therapy within 30 days after randomization. Safety endpoints included bleeding and stroke.
Main results. The primary endpoint of death or renal replacement therapy occurred in 158/344 patients (45.9%) in the culprit lesion–only PCI group and in 189/341 patients (55.4%) in the multivessel PCI group (relative risk [RR] 0.83, 95% CI 0.72–0.96, P = 0.01). The rate of death from any cause was lower in the culprit lesion–only PCI group than in the multivessel PCI group (RR 0.84, 95% CI 0.72–0.98, P = 0.03). There was no difference in stroke, and the risk of bleeding was numerically lower in the culprit lesion–only PCI group (RR 0.75, 95% CI 0.55–1.03).
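As a quick arithmetic check on the reported effect size, the relative risk is simply the ratio of the 30-day event rates in the two arms. The sketch below uses the event counts reported above; the confidence interval is a plain log-normal approximation and will not exactly match the trial's stratified analysis.

import math

events_culprit, n_culprit = 158, 344   # culprit lesion-only PCI arm
events_multi, n_multi = 189, 341       # immediate multivessel PCI arm

rr = (events_culprit / n_culprit) / (events_multi / n_multi)
se_log_rr = math.sqrt(1/events_culprit - 1/n_culprit + 1/events_multi - 1/n_multi)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (approximate 95% CI {lo:.2f}-{hi:.2f})")
# prints roughly "RR = 0.83 (approximate 95% CI 0.71-0.96)"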
Conclusion. Among patients who had multivessel coronary artery disease and acute myocardial infarction with cardiogenic shock, the 30-day risk of death or severe renal failure leading to renal replacement therapy was lower in patients who initially underwent PCI of the culprit lesion only compared with patients who underwent immediate multivessel PCI.
Commentary
Patients presenting with cardiogenic shock at the time of acute myocardial infarction have the highest mortality, up to 50%. Since the original SHOCK trial in 1999, it has been known that mortality can be reduced by early revascularization of the culprit vessel [1]. However, whether the non-culprit vessel should be revascularized at the time of presentation with acute myocardial infarction is unknown.
Recently, multiple trials have suggested a benefit of non-culprit vessel revascularization in patients with acute myocardial infarction who are hemodynamically stable at presentation. Three recent trials, PRAMI, CvLPRIT, and DANAMI-3-PRIMULTI, investigated this clinical question and found benefit of non-culprit vessel revascularization [2–4]. The results of these trials led to a 2015 focused update of the 2011 ACCF/AHA/SCAI guideline for percutaneous coronary intervention [5], in which non-infarct-related artery PCI in hemodynamically stable patients presenting with acute myocardial infarction was upgraded from class III to class IIb [5]. Whether these findings can be extended to hemodynamically unstable (cardiogenic shock) patients is not addressed in the guidelines.
In the current CULPRIT-SHOCK trial, Thiele et al investigated this clinical question in a well-designed trial of patients with acute myocardial infarction and cardiogenic shock. They found that the composite endpoint of death or renal replacement therapy at 30 days occurred more frequently in the multivessel PCI group than in the culprit lesion–only group (relative risk [RR] 0.83, 95% CI 0.71–0.96, P = 0.01). The composite endpoint was driven mainly by death (43.3% vs 51.6%, RR 0.84, 95% CI 0.72–0.98, P = 0.03), while the rate of renal replacement therapy was numerically higher in the multivessel PCI group (11.6% vs 16.4%, P = 0.07). The study enrolled a sicker population than prior trials, as evidenced by the high rates of mechanical ventilation (~80%) and catecholamine support (~90%) and the long ICU stay (median, 5 days). The significance of non-culprit lesions was determined by angiography (stenosis > 70%). In the culprit lesion–only group, only the culprit vessel was treated initially, but staged intervention for non-culprit vessels was encouraged.
A unique feature of this trial is that patients with chronic total occlusion (CTO) were included and, contrary to previous trials, operators were encouraged to attempt revascularization of CTO lesions. Although CTO intervention improves angina and ejection fraction [6,7], whether it confers a mortality benefit requires further investigation. In the CULPRIT-SHOCK trial, 24% of patients had one or more CTO lesions. This most likely contributed to the increased contrast use in the multivessel PCI group (250 vs 190 mL, P < 0.01). CTO is considered among the most challenging lesions to treat, and expertise and skill level vary among operators. In the hybrid CTO intervention model, staging the intervention whenever possible is recommended, as this type of intervention requires meticulous planning [8]. It is possible that attempting CTO intervention in this acute setting caused more harm than benefit. Furthermore, the investigators did not report the success rate of CTO intervention.
Another notable finding of this trial is that mortality in both groups was high (43.3% vs 51.6%). The revascularization arm of the original SHOCK trial almost 20 years ago had a 30-day mortality of 46.7%, almost identical to that in the current CULPRIT-SHOCK study. Despite improvements in hemodynamic support devices (eg, Impella, TandemHeart, and extracorporeal membrane oxygenation) and in medical therapy over the years, patients with cardiogenic shock complicating acute myocardial infarction continue to have a dismal prognosis.
The CULPRIT-SHOCK trial has a number of strengths, including a low drop-out rate (3%) and adequate power; however, there are some limitations. Some patients crossed over from the culprit lesion–only group to multivessel PCI because of lack of hemodynamic improvement, plaque shift, or newly detected lesions after treatment of the culprit lesion. Conversely, some patients crossed over from multivessel PCI to culprit lesion–only PCI for several reasons, including technical difficulty of the intervention.
Applications for Clinical Practice
In patients presenting with cardiogenic shock and acute myocardial infarction, culprit lesion–only intervention with a focus on hemodynamic support, followed by staged intervention if necessary, appears to be a better strategy than immediate multivessel PCI that includes non-culprit vessel PCI.
—Taishi Hirai, MD, University of Chicago Medical Center, Chicago, IL
1. Hochman JS, Sleeper LA, Webb JG, et al. Early revascularization in acute myocardial infarction complicated by cardiogenic shock. SHOCK Investigators. Should we emergently revascularize occluded coronaries for cardiogenic shock. N Engl J Med 1999;341:625–34.
2. Wald DS, Morris JK, Wald NJ, et al. Randomized trial of preventive angioplasty in myocardial infarction. N Engl J Med 2013;369:1115–23.
3. Gershlick AH, Khan JN, Kelly DJ, et al. Randomized trial of complete versus lesion-only revascularization in patients undergoing primary percutaneous coronary intervention for STEMI and multivessel disease: the CvLPRIT trial. J Am Coll Cardiol 2015;65:963–72.
4. Engstrom T, Kelbaek H, Helqvist S, et al. Complete revascularisation versus treatment of the culprit lesion only in patients with ST-segment elevation myocardial infarction and multivessel disease (DANAMI-3-PRIMULTI): an open-label, randomised controlled trial. Lancet 2015;386:665–71.
5. Levine GN, Bates ER, Blankenship JC, et al. 2015 ACC/AHA/SCAI Focused update on primary percutaneous coronary intervention for patients with ST-elevation myocardial infarction: an update of the 2011 ACCF/AHA/SCAI guideline for percutaneous coronary intervention and the 2013 ACCF/AHA guideline for the management of ST-elevation myocardial infarction. J Am Coll Cardiol 2016;67:1235–50.
6. Sapontis J, Salisbury AC, Yeh RW, et al. Early procedural and health status outcomes after chronic total occlusion angioplasty: a report from the OPEN-CTO Registry (Outcomes, Patient Health Status, and Efficiency in Chronic Total Occlusion Hybrid Procedures). JACC Cardiovasc Interv 2017;10:1523–34.
7. Henriques JP, Hoebers LP, Ramunddal T, et al. Percutaneous intervention for concurrent chronic total occlusions in patients with STEMI: the EXPLORE trial. J Am Coll Cardiol 2016;68:1622–32.
8. Brilakis ES, Grantham JA, Rinfret S, et al. A percutaneous treatment algorithm for crossing coronary chronic total occlusions. JACC Cardiovasc Interv 2012;5:367–79.
Endobronchial Valves for Severe Emphysema
Study Overview
Objective. To evaluate the efficacy and safety of Zephyr endobronchial valves (EBVs) in patients with heterogeneous emphysema and absence of collateral ventilation.
Design. Multicenter, randomized, nonblinded clinical trial.
Setting and participants. This study was conducted at 17 sites across Europe between 2014 and 2016. Patients with severe emphysema who were ex-smokers and ≥ 40 years old were recruited. Key inclusion criteria were post-bronchodilator FEV1 between 15% and 45% predicted despite optimal medical management, total lung capacity greater than 100% predicted, residual volume ≥ 180% predicted, and a 6-minute walk distance between 150 and 450 meters. Heterogeneous emphysema was defined as a greater than 10% difference in destruction score between the target and ipsilateral lobes as measured by high-resolution CT. All eligible patients underwent Chartis pulmonary assessment (Pulmonx, Redwood City, CA) to determine the presence of collateral ventilation between the target and adjacent lobes, and patients with collateral ventilation were excluded.
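Because the inclusion criteria are concrete numeric thresholds, they can be summarized as a simple screening rule. The sketch below is illustrative only: the field names are hypothetical, the list is not exhaustive, and the Chartis result is reduced to a yes/no collateral-ventilation flag rather than the trial's actual screening workflow.

def meets_key_criteria(age, ex_smoker, fev1_pct_pred, tlc_pct_pred, rv_pct_pred,
                       walk_6min_m, heterogeneity_pct, collateral_ventilation):
    return (age >= 40 and ex_smoker
            and 15 <= fev1_pct_pred <= 45        # post-bronchodilator FEV1, % predicted
            and tlc_pct_pred > 100               # total lung capacity, % predicted
            and rv_pct_pred >= 180               # residual volume, % predicted
            and 150 <= walk_6min_m <= 450        # 6-minute walk distance, meters
            and heterogeneity_pct > 10           # target vs ipsilateral lobe destruction score
            and not collateral_ventilation)      # Chartis assessment must show no collateral ventilation

print(meets_key_criteria(62, True, 30, 120, 210, 300, 15, False))  # prints True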
Intervention. Patients were randomized 2:1 to either EBV plus standard of care (intervention) or standard of care alone (control) by blocked design and concealed envelopes. The EBV group underwent immediate placement of Zephyr EBVs with the intention of complete lobar occlusion.
Main outcome measures. The primary outcome was the percentage of subjects with an FEV1 improvement from baseline of 12% or greater at 3 months post-procedure. Changes in FEV1, residual volume, 6-minute walk distance, St. George’s Respiratory Questionnaire score, and modified Medical Research Council score were assessed at 3 and 6 months, and target lobe volume reduction was assessed on chest CT at 3 months.
Main results. 97 subjects were randomized to the intervention (n = 65) or control group (n = 32). At 3 months, 55.4% of intervention and 6.5% of control subjects had an FEV1 improvement of 12% or more (P < 0.001). Improvements were maintained at 6 months: intervention, 56.3%, versus control, 3.2% (P < 0.001), with a mean ± SD change in FEV1% at 6 months of 20.7 ± 29.6% and –8.6 ± 13.0%, respectively. A total of 89.8% of intervention subjects had target lobe volume reduction of 350 mL or greater (mean, 1.09 ± 0.62 L; P < 0.001). The differences in outcomes between the intervention and control groups were statistically significant, with the following measured differences: residual volume, –700 mL; 6-minute walk distance, +78.7 m; St. George’s Respiratory Questionnaire score, –6.5 points; modified Medical Research Council dyspnea score, –0.6 points; and BODE (body mass index, airflow obstruction, dyspnea, and exercise capacity) index, –1.8 points (all P < 0.05). Pneumothorax was the most common adverse event, occurring in 19 of 65 intervention subjects (29.2%).
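The primary endpoint is a responder analysis: a relative improvement in FEV1 of at least 12% from baseline. A minimal sketch of that calculation, using hypothetical FEV1 values, is shown below.

def is_responder(fev1_baseline_l, fev1_followup_l, threshold=0.12):
    # Relative change in FEV1 from baseline; responders improve by at least 12%.
    relative_change = (fev1_followup_l - fev1_baseline_l) / fev1_baseline_l
    return relative_change >= threshold

print(is_responder(0.80, 0.92))  # 15% improvement   -> True
print(is_responder(0.80, 0.85))  # 6.25% improvement -> False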
Conclusion. Endobronchial valve treatment in hyperinflated patients with heterogeneous emphysema without collateral ventilation resulted in clinically meaningful benefits in lung function, dyspnea, exercise tolerance and quality of life, with an acceptable safety profile.
Commentary
Patients with severe emphysema are difficult to manage. Optimal medical management is required to maintain lung function and quality of life, including combination bronchodilators (long-acting beta-2 agonists, long-acting anticholinergics, and inhaled corticosteroids), roflumilast (a selective phosphodiesterase-4 inhibitor), oral corticosteroids or macrolide antibiotics when indicated, long-term oxygen, and noninvasive ventilatory support. Palliative care consultation and support, adequate nutritional support, influenza and pneumococcal vaccination, and pulmonary rehabilitation/graded exercise training are also important aspects of emphysema treatment [1].
To help patients with severe emphysema who continue to decline despite intensive medical management, lung volume reduction strategies were devised. In 2003, the NETT trial was conducted [2]. In this trial, lung volume reduction surgery was performed in 608 patients, who were followed for 29 months. The study revealed a lack of overall survival benefit, with significant immediate postoperative mortality and complication rates. Despite this disappointing result, a subgroup of patients (those with upper-lobe–predominant disease and low baseline exercise capacity) had a statistically significant mortality benefit from surgery.
Since then, less invasive methods of lung volume reduction have been sought. To date, one-way endobronchial valves, self-activating coils, and targeted destruction and remodeling of emphysematous lung with vapor or sealant have been studied. Several studies have examined the efficacy and safety of coils, showing reasonable improvement in 6-minute walk distance and FEV1; however, complications including death, pneumothorax, and pneumonia were noted. Vapor ablation (STEP-UP trial) [3] and lung sealant [4] have also been attempted, but increased infection was problematic. The 2017 GOLD guidelines include endobronchial one-way valves and lung coils as suggested interventional bronchoscopic options for lung volume reduction [1].
Two types of endobronchial valves have been introduced to date: the intrabronchial valve, developed by Olympus, and the Zephyr valve, developed by Pulmonx. Endobronchial valves are deployed into the bronchi under bronchoscopic guidance and limit airflow to the portions of the lung distal to the valve while allowing mucus and air movement in the proximal direction. The VENT study, the largest endobronchial valve trial using the Zephyr valve, was published in 2010 [5]. It demonstrated the efficacy of endobronchial valve treatment, especially in patients with heterogeneous emphysema and complete interlobar fissures, as opposed to homogeneous emphysema and incomplete interlobar fissures. Subsequent studies demonstrated the importance of the absence of collateral ventilation, measured by the Chartis system, when considering endobronchial valves [6].
The current study by Kemp et al is the first multicenter randomized endobronchial valve trial conducted in Europe. It demonstrated remarkable improvements in FEV1 (mean 140 mL increase vs 90 mL decrease) and 6-minute walk distance (mean +36.2 m vs –42.5 m) after endobronchial valve treatment in patients with severe emphysema. Target lobe volume reduction reached up to 2 liters. Patients in the control group were offered endobronchial valve treatment after the 6-month study follow-up period, and 30 of the 32 patients opted to receive it. The authors concluded that endobronchial valve therapy resulted in clinically meaningful benefits in lung function, dyspnea, exercise tolerance, and quality of life with an acceptable safety profile.
It is notable that the authors enrolled only selected patients: those with heterogeneous emphysema, absence of collateral ventilation, low risk of COPD exacerbation or infection, and the likely ability to tolerate a pneumothorax. Despite this, 13 patients developed pneumothorax and 1 patient died, leading to a significantly longer average length of hospital stay in the treatment group. Although this complication rate is not higher than in prior endobronchial valve studies, it is important to keep in mind when broadly applying the outcomes of this study to patient care. The lack of long-term follow-up and the nonblinded study design also limit the strength of this study.
Applications for Clinical Practice
Many patients suffer from emphysema, and those with severe disease are the most difficult to manage. It is important to incorporate optimal medical management, including bronchodilators, palliative care, oxygen therapy, pulmonary rehabilitation, and noninvasive ventilation options. When patients with severe emphysema continue to decline or seek further improvement in their care, and when they meet the specific criteria for lung volume reduction, endobronchial valve therapy should be considered and patients should be referred to appropriate centers. However, the risk of complications, such as pneumothorax, remains high.
—Minkyung Kwon, MD, Pulmonary and Critical Care Medicine, Mayo Clinic Florida, Jacksonville, FL, and Joel Roberson, MD, Department of Radiology, William Beaumont Hospital, Royal Oak, MI
1. The Global Strategy for the Diagnosis, Management and Prevention of COPD, Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2017.
2. Weinmann GG, Chiang YP, Sheingold S. The National Emphysema Treatment Trial (NETT): a study in agency collaboration. Proc Am Thorac Soc 2008;5:381–4.
3. Herth FJ, Valipour A, Shah PL, et al. Segmental volume reduction using thermal vapour ablation in patients with severe emphysema: 6-month results of the multicentre, parallel-group, open-label, randomised controlled STEP-UP trial. Lancet Respir Med 2016;4:185–93.
4. Come CE, Kramer MR, Dransfield MT, et al. A randomised trial of lung sealant versus medical therapy for advanced emphysema. Eur Respir J 2015;46:651–62.
5. Sciurba FC, Ernst A, Herth FJ, et al. A randomized study of endobronchial valves for advanced emphysema. N Engl J Med 2010;363:1233–44.
6. Klooster K, ten Hacken NH, Hartman JE, et al. Endobronchial valves for emphysema without interlobar collateral ventilation. N Engl J Med 2015;373:2325–35.