
Brentuximab Vedotin with Chemotherapy Improves Progression-Free Survival in Advanced-Stage Hodgkin’s Lymphoma

Article Type
Changed
Wed, 04/29/2020 - 11:25

Study Overview

Objective. To compare the efficacy of brentuximab vedotin, doxorubicin, vinblastine, and dacarbazine (A+AVD) with that of doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD) in patients with stage III or IV classic Hodgkin’s lymphoma.

Design. The ECHELON-1 trial, an international, open-label, randomized phase 3 trial.

Setting and participants. In this multicenter international trial, a total of 1334 patients underwent randomization from November 2012 through January 2016. Eligible patients were 18 years of age or older and had newly diagnosed, histologically proven classic Hodgkin’s lymphoma, Ann Arbor stage III or IV. Patients were eligible only if they had not received prior systemic chemotherapy or radiotherapy. All patients were required to have an ECOG performance status of ≤ 2 and adequate hematologic parameters (hemoglobin ≥ 8 g/dL, ANC ≥ 1500/μL, and platelet count ≥ 75,000/μL). Patients with nodular lymphocyte-predominant Hodgkin’s lymphoma, pre-existing peripheral sensory neuropathy, or known cerebral or meningeal disease were excluded.

Intervention. Patients were randomized in a 1:1 fashion to receive A+AVD (brentuximab vedotin 1.2 mg/kg, doxorubicin 25 mg/m2, vinblastine 6 mg/m2, and dacarbazine 375 mg/m2) or ABVD (doxorubicin 25 mg/m2, bleomycin 10 units/m2, vinblastine 6 mg/m2, and dacarbazine 375 mg/m2) IV on days 1 and 15 of each 28-day cycle for up to 6 cycles. A PET scan was done at the end of the second cycle (PET2), and if this showed increased uptake at any site or uptake at a new site of disease (Deauville score 5), patients could be switched to an alternative frontline therapy at the treating physician’s discretion.

Main outcome measures. The primary endpoint of this study was modified progression-free survival (mPFS), defined as time to disease progression, death, or modified progression (noncomplete response after completion of frontline therapy—Deauville score 3, 4, or 5 on PET). Modified progression was incorporated as an endpoint in order to assess the effectiveness of frontline therapy. A secondary endpoint of the study was overall survival (OS).
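The mPFS event definition above can be expressed as a simple decision rule. The sketch below is illustrative only (not trial code, and the exact adjudication logic is an assumption based on the definitions in this summary): a patient counts as having an mPFS event on progression, death, or a noncomplete response (end-of-treatment Deauville score 3, 4, or 5) that leads to subsequent anticancer therapy.

```python
def is_mpfs_event(progressed: bool, died: bool,
                  eot_deauville: int,
                  received_subsequent_therapy: bool) -> bool:
    """Illustrative mPFS event rule per the definitions described above.

    eot_deauville: end-of-frontline-therapy PET Deauville score (1-5).
    """
    # Conventional PFS events always count.
    if progressed or died:
        return True
    # "Modified progression": noncomplete response (Deauville 3-5)
    # resulting in treatment with subsequent therapy.
    if eot_deauville >= 3:
        return received_subsequent_therapy
    return False
```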

Results. The baseline characteristics were well balanced between the treatment arms. Overall, 58% of the patients were male and 64% had stage IV disease. The median age was 36 years, and 9% in each group were over the age of 65. After a median follow-up of 24.9 months, the independently assessed 2-year mPFS was 82.1% and 77.2% in the A+AVD and ABVD groups, respectively (hazard ratio [HR] 0.77; 95% confidence interval [CI] 0.60–0.98). The 2-year mPFS rate according to investigator assessment was 81% and 74.4% in the A+AVD and ABVD groups, respectively. Modified progression (failure to achieve a complete response after completion of frontline therapy, resulting in treatment with subsequent therapy) occurred in 9 and 22 patients in the A+AVD and ABVD groups, respectively. A prespecified subgroup analysis showed that patients from North America, male patients, patients with involvement of more than 1 extranodal site, patients with a high International Prognostic Score (4–7), patients < 60 years old, and those with stage IV disease appeared to benefit more from A+AVD. The rate of PET2 negativity was 89% with A+AVD and 86% with ABVD. The 2-year overall survival was 96.6% in the A+AVD group and 94.9% in the ABVD group (HR 0.72; 95% CI 0.44–1.17). Fewer patients in the A+AVD group received subsequent cancer-directed therapy.
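As a back-of-envelope check on the 2-year mPFS rates reported above, the absolute difference and the implied number needed to treat (NNT) can be computed directly. This is illustrative arithmetic only; the trial itself does not report an NNT, and the derived figure is an approximation from the 2-year point estimates.

```python
# 2-year mPFS point estimates from the independent assessment above.
mpfs_a_avd = 0.821   # A+AVD arm
mpfs_abvd = 0.772    # ABVD arm

# Absolute risk difference: ~0.049, i.e., 4.9 percentage points.
abs_risk_diff = mpfs_a_avd - mpfs_abvd

# NNT ~= 1 / absolute risk difference: roughly 20 patients treated with
# A+AVD per additional patient free of an mPFS event at 2 years.
nnt = 1 / abs_risk_diff

print(f"Absolute difference: {abs_risk_diff:.1%}, NNT ~ {nnt:.0f}")
```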

Neutropenia was more commonly reported in the A+AVD group (58% vs. 45%). Moreover, febrile neutropenia was reported in 19% and 8% of patients in the A+AVD and ABVD groups, respectively. Discontinuation rates for febrile neutropenia were ≤ 1% in either arm. The rate of infections was 55% in the A+AVD group and 50% in the ABVD group (grade 3 or higher: 18% and 10%, respectively). After review of the rates of febrile neutropenia, the safety monitoring committee recommended that primary prophylaxis with granulocyte colony-stimulating factor (G-CSF) be used for patients who were yet to be enrolled. The rate of febrile neutropenia among the 83 patients in the A+AVD group who received primary prophylaxis was lower than among those who did not (11% vs. 18%). Peripheral neuropathy occurred in 67% of patients in the A+AVD group and 42% in the ABVD group (grade 3 or higher: 11% vs. 2%, respectively). Neuropathy led to discontinuation of a study drug in 10% of patients in the A+AVD group. At the time of last follow-up, 67% of patients with peripheral neuropathy in the A+AVD group had resolution of, or improvement by at least one grade in, their neuropathy. Pulmonary toxicity was reported in 2% of patients in the A+AVD group and 7% of the ABVD group (grade 3 or higher: < 1% vs. 3%, respectively). During treatment, 9 deaths were reported in the A+AVD group and 13 in the ABVD group; of the deaths in the ABVD group, 11 were associated with pulmonary toxicity.

Conclusion. A+AVD had superior efficacy to ABVD in the treatment of patients with advanced-stage Hodgkin’s lymphoma.

Commentary

Hodgkin’s lymphoma (HL) accounts for approximately 10% of all lymphomas worldwide annually [1]. While outcomes with frontline therapy for patients with HL have dramatically improved with ABVD, up to 30% of patients have either refractory disease or relapse after initial therapy [2,3]. One particular area of concern in the current treatment of HL with ABVD is the pulmonary toxicity associated with bleomycin. Pulmonary toxicity from bleomycin occurs in approximately 20%–30% of patients and can lead to long-term morbidity [4,5]. In addition, approximately 15% or more of HL patients are elderly and may have co-existing pulmonary disease. In the previously published E2496 trial, the risk of bleomycin lung toxicity in the elderly was 24% [3]. Although the risk of clinically relevant lung toxicity remains low, there is considerable concern about it amongst clinicians. Recent data have challenged the benefit of bleomycin as a component of ABVD. For example, Johnson and colleagues have shown that in patients with a negative PET scan after 2 cycles of ABVD, the omission of bleomycin (ie, continuation of AVD) resulted in only a 1.6% reduction in 3-year progression-free survival, with a decrease in pulmonary toxicity [6].

Recently, there have been notable advances in the treatment of patients with relapsed or refractory HL, including the incorporation of the PD-1 inhibitor nivolumab as well as the CD30-directed antibody-drug conjugate brentuximab vedotin (BV). Given the activity of such agents in relapsed and refractory patients, there has been much enthusiasm about incorporating them into the frontline setting. In the current ECHELON-1 trial, Connors and colleagues present the results of a randomized phase 3 trial comparing ABVD, the current standard of care, with A+AVD, which replaces bleomycin with BV. The trial used a primary endpoint of modified progression-free survival, in which a noncomplete response after primary therapy followed by subsequent anticancer therapy was considered disease progression. Notably, the trial met its primary endpoint of improved modified PFS, with a 4.9% lower risk of progression, death, or noncomplete response with subsequent need for treatment at 2 years. Overall survival was not significantly different at the time of analysis.

There are some additional noteworthy findings. First, A+AVD was associated with a higher risk of febrile neutropenia and infectious complications; however, this risk was lowered following the incorporation of G-CSF prophylaxis. Pulmonary toxicity was lower in the A+AVD group (2% vs. 7%). A+AVD was associated with an increased risk of peripheral neuropathy, which appeared to improve or resolve following discontinuation of therapy. The neuropathy was mainly low grade, with only 11% being grade 3 or higher. Although follow-up remains short, A+AVD did appear to have superior efficacy with a decreased risk of pulmonary toxicity in this study. It is worth noting that the risk of neurotoxicity was higher, albeit reversible with drug discontinuation. Given these results, A+AVD warrants consideration as frontline therapy in newly diagnosed patients with advanced-stage classic Hodgkin’s lymphoma.

Applications for Clinical Practice

The results of this trial suggest that A+AVD with G-CSF support compares favorably to ABVD and may represent an acceptable first-line treatment strategy, particularly for patients at higher risk for pulmonary toxicity, although follow-up remains short at this time.

—Daniel Isaac, DO, MS

References

1. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2017. CA Cancer J Clin 2017;67:7–30.

2. Canellos GP, Anderson JR, Propert KJ, et al. Chemotherapy of advanced Hodgkin’s disease with MOPP, ABVD, or MOPP alternating with ABVD. N Engl J Med 1992;327:1478–84.

3. Gordon LI, Hong F, Fisher RI, et al. Randomized phase III trial of ABVD versus Stanford V with or without radiation therapy in locally extensive and advanced-stage Hodgkin lymphoma: An intergroup study coordinated by the Eastern Cooperative Oncology Group (E2496). J Clin Oncol 2013;31:684–91.

4. Martin WG, Ristow KM, Habermann TM, et al. Bleomycin pulmonary toxicity has a negative impact on the outcome of patients with Hodgkin’s lymphoma. J Clin Oncol 2005;23:7614–20.

5. Hoskin PJ, Lowry L, Horwich A, et al. Randomized comparison of the Stanford V regimen and ABVD in the treatment of advanced Hodgkin’s lymphoma: United Kingdom National Cancer Research Institute Lymphoma Group Study ISRCTN 64141244. J Clin Oncol 2009;27:5390–6.

6. Johnson P, Federico M, Kirkwood A, et al. Adapted treatment guided by interim PET-CT scan in advanced Hodgkin’s lymphoma. N Engl J Med 2016;374:2419–29.

Journal of Clinical Outcomes Management - 25(2)


Early Hip Fracture Surgery Is Associated with Lower 30-Day Mortality

Article Type
Changed
Wed, 04/29/2020 - 11:33

Study Overview

Objective. To determine the association between wait times for hip fracture surgery and outcomes after surgery and to identify the optimal time window for conducting hip fracture surgery.

Design. Observational cohort study.

Setting and participants. The study was conducted using population-based health administrative databases in Ontario, Canada. The databases collected information on health care services, physician and hospital information, and demographic characteristics in Ontario. The investigators used the databases to identify adults undergoing hip fracture surgery between April 2009 and March 2014. Excluded were non-Ontario residents, those with elective hospital admissions, those with prior hip fractures, and patients without hospital arrival time data. Other exclusion criteria included age younger than 45 years, delay in surgery longer than 10 days, surgery performed by a nonorthopedic surgeon, and treatment at hospitals with fewer than 5 hip fracture surgeries during the study period.

The primary independent variable was wait time for surgery, calculated as the time from emergency department arrival until surgery, rounded to the hour. Other covariates included in the analysis were patient characteristics, including age, sex, and comorbid conditions assessed using the Deyo-Charlson comorbidity index, the Johns Hopkins Collapsed Aggregated Diagnosis Groups, and other validated algorithms. In addition, other conditions associated with hip fracture were included: osteomyelitis, bone cancer, other fractures, history of total hip arthroplasty, and multiple trauma. Additional covariates included median neighborhood household income quintile as a proxy for socioeconomic status, patient’s discharge disposition, and rural status. Characteristics of the procedure, including procedure type, duration, and timing (working hours vs. after hours), were assessed. Surgeon- and hospital-related factors included years since orthopedic certification as a proxy for surgeon experience and the number of hip fracture procedures performed in the year preceding the event, for both surgeon and hospital. Other hospital characteristics included academic or community status, hospital size, and the hospital’s capacity for performing nonelective surgery.

Main outcome measures. The main outcome measure was mortality within 30 days of admission for hip fracture surgery. Secondary outcomes included mortality at 90 and 365 days after admission, medical complications within 30, 90, and 365 days, and a composite of mortality and any complication at each of these time points. Complications included myocardial infarction, deep vein thrombosis, pulmonary embolism, and pneumonia. The statistical analysis modeled the probability of complications as a function of time elapsed from emergency department arrival to surgery using risk-adjusted spline analyses. The association between surgical wait time and outcomes was plotted to visualize an inflection point at which complications begin to rise. The area under the receiver operating characteristic curve (AUC) was calculated at candidate time thresholds around this inflection point, and the threshold producing the maximum AUC was selected to classify patients as receiving early or delayed surgery. Early and delayed patients were then matched 1:1 on propensity score without replacement. After matching, outcomes were compared between the early and delayed groups, and absolute risk differences were calculated using generalized estimating equations.
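The threshold-selection step can be sketched as follows. This is a simplified illustration with hypothetical data, not the study’s actual code; for a binary predictor such as “wait > t,” the AUC reduces to the average of sensitivity and specificity.

```python
# Hedged sketch: choosing a wait-time threshold by maximizing AUC,
# analogous in spirit to the study's approach. All data are hypothetical.

def auc_for_threshold(waits, events, t):
    """AUC of the binary classifier 'wait > t' for predicting an event.
    For a binary predictor this equals (sensitivity + specificity) / 2."""
    pos = [w for w, e in zip(waits, events) if e]       # waits of patients with an event
    neg = [w for w, e in zip(waits, events) if not e]   # waits of patients without
    sens = sum(w > t for w in pos) / len(pos)
    spec = sum(w <= t for w in neg) / len(neg)
    return (sens + spec) / 2

# hypothetical cohort: wait times in hours, 1 = complication/death within 30 days
waits  = [10, 14, 20, 22, 26, 30, 36, 48, 60, 72]
events = [ 0,  0,  0,  0,  1,  0,  1,  1,  1,  1]

candidates = range(12, 49, 4)  # thresholds around the area of inflection
best = max(candidates, key=lambda t: auc_for_threshold(waits, events, t))
print(best)  # → 24
```

In this toy cohort the 24-hour threshold maximizes the AUC, so patients would be classified as early (≤ 24 hours) or delayed (> 24 hours), mirroring the cut point the investigators derived from their spline and ROC analyses.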

Main results. A total of 42,230 adults were included, with a mean age of 80.1 (SD, 10.7) years; 70.5% were women. The average time from emergency department arrival to surgery was 38.8 (SD, 28.8) hours. The spline models identified an inflection point at 24 hours, after which the risk of complications began to rise, and the investigators used 24 hours as the cut point to classify patients into early and delayed surgery groups: 33.6% of patients received early surgery and 66.4% had delayed surgery. Propensity score matching yielded a sample of 13,731 patients in each group. Compared with patients who had early surgery, those with delayed surgery had higher 30-day mortality (6.5% vs. 5.8%; absolute risk difference, 0.79%) and higher rates of pulmonary embolism (1.2% vs. 0.7%; absolute risk difference, 0.51%), myocardial infarction (1.2% vs. 0.8%; absolute risk difference, 0.39%), and pneumonia (4.6% vs. 3.7%; absolute risk difference, 0.95%). For the composite outcome, 12.1% of the delayed group versus 10.1% of the early group experienced mortality or a complication, an absolute difference of 2.16%. Results at 90 and 365 days were similar and remained significant, as were results in subgroups of patients without comorbidity and those receiving surgery within 36 hours.
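As a rough illustration of the arithmetic behind these comparisons, the 30-day mortality figures imply a crude (unadjusted) risk difference of about 0.7%; the study’s reported 0.79% reflects adjustment via generalized estimating equations. A minimal sketch using the reported proportions and matched group size, with a simple Wald confidence interval rather than the study’s GEE method:

```python
import math

def risk_difference(p_delayed, p_early, n_per_group):
    """Crude absolute risk difference between two equal-sized matched groups,
    with a Wald 95% confidence interval (normal approximation)."""
    rd = p_delayed - p_early
    se = math.sqrt(p_delayed * (1 - p_delayed) / n_per_group
                   + p_early * (1 - p_early) / n_per_group)
    return rd, (rd - 1.96 * se, rd + 1.96 * se)

# 30-day mortality in the propensity-matched cohort (13,731 patients per group)
rd, ci = risk_difference(0.065, 0.058, 13731)
print(f"risk difference = {rd:.4f}, 95% CI ({ci[0]:.4f}, {ci[1]:.4f})")
```

Even this unadjusted interval excludes zero, consistent with the study’s conclusion that the mortality difference is statistically significant at 30 days.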

Conclusion. Early hip fracture surgery, defined as surgery within 24 hours of emergency department arrival, was associated with lower mortality and fewer complications compared with delayed surgery.

Commentary

Hip fracture predominantly affects older adults and can have devastating consequences: older adults who experience hip fracture are at increased risk of functional decline, institutionalization, and death [1]. Because hip fracture care often includes surgical repair, and because the timing of surgery is a potentially modifiable factor, many studies have examined the impact of surgical timing on hip fracture outcomes [2]. Prior smaller cohort studies demonstrated that delayed surgery may worsen outcomes, but the reasons for delay, such as medical complexity, may themselves increase the risk of adverse outcomes [3]. The current study adds to this literature by examining a large population-based cohort, allowing the analysis to account for medical comorbidities through matching and through sensitivity analyses restricted to patients without comorbidities. The study also takes a different approach to defining early versus delayed surgery, using analytical methods to determine when the risk of complications begins to rise. The results indicate that early surgery is associated with better outcomes at 30 days and beyond, and that delaying surgery beyond 24 hours is associated with poorer patient outcomes.

Patients with hip fracture require care from multiple disciplines and across multiple settings. These care components may also affect patient outcomes, particularly at 90 and 365 days; examples include anesthesia care during hip fracture surgery [4], pain control, early mobilization, and delirium prevention [1,5]. A limitation of administrative databases is that some of these potentially important factors are not captured and thus cannot be controlled for, and it is conceivable that early surgery is associated with other care characteristics that are themselves favorable to outcomes. Another limitation is the difficulty of teasing out the effect of medical complexity at the time of hip fracture presentation, which may influence both the timing of surgery and patient outcomes, despite sensitivity analyses limiting the sample to those who had surgery within 36 hours and to those without medical comorbidities in the administrative data, and despite adjustment for antiplatelet and anticoagulant medications. A randomized controlled trial could further elucidate the causal relationship between timing of surgery and patient outcomes. Despite these limitations, the results make a strong case for limiting surgical wait time to within 24 hours of the patient’s arrival in the emergency department.

Applications for Clinical Practice

Similar to how hospitals organize care for early reperfusion in acute myocardial infarction and early thrombolytic therapy in acute ischemic stroke, hip fracture care may need to be organized and coordinated to reduce surgical wait time to within 24 hours. Timely assessment by an orthopedic surgeon, anesthesiologist, and medical consultants to prepare patients for surgery, along with ensuring the availability of an operating room and staff for hip fracture patients, are necessary steps to reach this goal.

—William W. Hung, MD, MPH

References

1. Hung WW, Egol KA, Zuckerman JD, Siu AL. Hip fracture management: tailoring care for the older patient. JAMA 2012;307:2185–94.

2. Orosz GM, Magaziner J, Hannan EL, et al. Association of timing of surgery for hip fracture and patient outcomes. JAMA 2004;291:1738–43.

3. Vidán MT, Sánchez E, Gracia Y, et al. Causes and effects of surgical delay in patients with hip fracture: a cohort study. Ann Intern Med 2011;155:226–33.

4. Neuman MD, Silber JH, Elkassabany NM, et al. Comparative effectiveness of regional versus general anesthesia for hip fracture surgery in adults. Anesthesiology 2012;117:72–92.

5. Grigoryan KV, Javedan H, Rudolph JL. Orthogeriatric care models and outcomes in hip fracture patients: a systematic review and meta-analysis. J Orthop Trauma 2014;28:e49–55.

Issue
Journal of Clinical Outcomes Management - 25(1)






Home Monitoring of Cystic Fibrosis

Article Type
Changed
Wed, 04/29/2020 - 11:34

Study Overview

Objective. To determine if an intervention directed toward early detection of pulmonary exacerbations using electronic home monitoring of spirometry and symptoms would result in slower decline in lung function.

Design. Multicenter, randomized, nonblinded 2-arm clinical trial.

Setting and participants. The study was conducted at 14 cystic fibrosis centers in the United States between 2011 and 2015. Patients with cystic fibrosis who were clinically stable at baseline (FEV1 > 25% predicted) and at least 14 years old (adolescents and adults) were included and randomized 1:1 to either an early intervention arm or a usual care arm.

Intervention. Patients in the intervention arm used home spirometers and reported respiratory symptoms twice weekly using the Cystic Fibrosis Respiratory Symptoms Diary (CFRSD); data were collected by the central AM2 system. The AM2 system alerted sites to contact patients for an acute pulmonary exacerbation evaluation when FEV1 fell by more than 10% from baseline or when two or more of the eight CFRSD respiratory symptoms worsened from baseline. Patients in the usual care arm had quarterly CF visits and/or acute visits as needed.
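The alert rule as described can be sketched as follows. This is a hypothetical illustration of the trigger logic only, not the AM2 system’s actual implementation; the function name and the numeric symptom encoding are assumptions.

```python
def should_alert(baseline_fev1, current_fev1, baseline_symptoms, current_symptoms):
    """Flag a patient for an acute exacerbation evaluation per the trigger
    criteria described in the study: FEV1 down more than 10% from baseline,
    or worsening from baseline in >= 2 of the 8 CFRSD respiratory symptoms.
    Symptom scores are assumed numeric, with higher values meaning worse."""
    fev1_drop = (baseline_fev1 - current_fev1) / baseline_fev1 > 0.10
    n_worse = sum(cur > base for base, cur in zip(baseline_symptoms, current_symptoms))
    return fev1_drop or n_worse >= 2

# hypothetical patient: FEV1 roughly stable, but two symptom scores worsened
print(should_alert(3.0, 2.9, [1] * 8, [2, 2, 1, 1, 1, 1, 1, 1]))  # True
```

Either trigger alone is sufficient, so a patient with a stable FEV1 but worsening symptoms (or the reverse) would still prompt a site contact.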

Main outcome measures. The primary outcome variable was the 52-week change in FEV1 volume in liters. Secondary outcome variables were changes in CFQ-R (Cystic Fibrosis Questionnaire, revised), CFRSD, FEV1 % predicted, FVC in liters, FEF25-75%, time to first acute pulmonary exacerbation, time from first pulmonary exacerbation to subsequent pulmonary exacerbation, number of hospitalization days, number of hospitalizations, percent change in prevalence of Pseudomonas or Staphylococcus aureus and global assessment of protocol burden score.

Main results. A total of 267 patients were randomized, and results were analyzed on an intention-to-treat basis. There was no significant difference between study arms in the 52-week change in FEV1 (mean slope difference, 0.00 L; 95% confidence interval, –0.07 to 0.07; P = 0.99). Subjects in the early intervention arm had exacerbations detected sooner and more frequently than those in the usual care arm (hazard ratio for time to first exacerbation, 1.45; 95% confidence interval, 1.09 to 1.93; P = 0.01). Adverse events did not differ significantly between treatment arms.

Conclusion. An intervention of electronic home monitoring of patients with CF was able to detect more exacerbations than usual care, but this did not result in slower decline in lung function.

Commentary

Establishing the efficacy and safety of home monitoring is a popular research topic in the current era of information technology. Most data to date have come from chronic adult diseases such as heart failure, diabetes, and COPD [1]. While relatively rare, CF is a chronic lung disease that could potentially benefit from home monitoring; previous evidence suggests that up to a quarter of pulmonary exacerbations in CF patients result in worsened baseline lung function [2]. Close monitoring of symptoms and FEV1 through home monitoring was therefore hypothesized to improve management and long-term lung function in this population. Indeed, in children with CF, electronic home monitoring of symptoms and lung function was able to detect pulmonary exacerbations early [3]. The frequency of monitoring varies widely between centers, and some have suggested that aggressive monitoring of CF yields better clinical outcomes [4]. Current CF guidelines make no specific recommendations regarding monitoring frequency.

In this study, Lechtzin et al attempted to determine whether early detection and treatment of acute pulmonary exacerbations in CF patients through home monitoring would prevent progressive decline in lung function. This multicenter randomized trial was conducted at large CF centers in the US with a total cohort of 267 patients. Mean follow-up was 46.8 weeks per participant in the intervention arm and 50.9 weeks in the usual care arm. Given the predefined 52-week follow-up, the primary outcome of FEV1 in liters was deemed sensitive enough to detect a decline in lung function; however, the 4.1-week shorter mean follow-up in the intervention arm could have influenced the interpretation of the results. Additionally, a large percentage of patients were clinically stable at enrollment, with an average FEV1 of 79.5% predicted; this raises questions about the efficacy of home monitoring in CF patients with moderate to severe lung disease. Most importantly, because of the nature of the intervention, the study could not be blinded, which could have substantially increased anxiety and self-awareness among patients in the intervention arm when reporting their symptoms.

Currently, an established consensus definition of pulmonary exacerbation in CF is lacking. Previous studies have proposed several different criteria for acute pulmonary exacerbations, most of which depend on symptom changes such as cough, sputum, chest pain, shortness of breath, fatigue, and weight loss, making the definitions less specific and objective.

The number of acute visits in the intervention arm was significantly higher than in the usual care arm (153 vs 64), yet a substantial number of these visits did not lead to a diagnosis of acute pulmonary exacerbation. In the intervention arm, 108 acute visits met protocol-defined criteria for pulmonary exacerbation and 29 did not, compared with 44 and 12, respectively, in the usual care arm. Given that the groups had similar baseline demographics and were randomized appropriately, one would expect the number of acute visits severe enough to meet the protocol definition of pulmonary exacerbation to be similar in both groups; instead, the absolute number of protocol-defined exacerbations was far greater in the intervention group. One could therefore question the clinical significance of what was defined as an acute pulmonary exacerbation: the excess of protocol-defined exacerbations in the intervention group may simply reflect increased surveillance. If these additional exacerbations were clinically meaningful, the failure to identify and treat a comparable number of exacerbations in the usual care group should have led to a larger decline in FEV1 at 52 weeks relative to the intervention group. Given that the study found no significant between-arm difference in FEV1 change, the monitored parameters in the intervention group may have been overly sensitive.

Of note, the usual care arm did have a statistically significant higher rate of hospitalizations and IV antibiotic use, suggesting that early identification of acute visits can identify patients earlier in the course of an acute pulmonary exacerbation and prevent higher level of care, though at the expense of more acute event “false positives,” or over-diagnosis. This trade-off may not result in cost saving, though this was not a consideration of this study. Additionally, there was likely difference in treatment, as treatment was not standardized, with potential implications for the validity of results.

The early intervention protocol was not only shown to lead to increased visits with no benefit in lung function decline, but as one may expect, also proved to be remarkably burdensome to many patients compared to the usual care protocol. Entering data on a weekly basis (or perhaps even monthly) was found to be burdensome in many remote-monitoring trials [5]. This may be especially apparent in a younger age group: in this study the average age of the study population was between 18 and 30 years of age. It can be hypothesized that this age group may not have enough responsibility, time, or enthusiasm to participate in home monitoring. Home monitoring maybe more effective in a disease condition where the average age is older or in a pediatric population in whom the parents oversee the care of the patient or have more time and receive subjective benefit from home monitoring services.

Less may be sufficient. The current study suggests that the home monitoring in CF may increase medical expense and unnecessary antibiotic use with no improvement in lung function. It is difficult to assess from this study the impact that the burden of home monitoring would have on clinical outcomes, however, previous meta-analysis of data studying COPD populations using home monitoring system, interestingly, also had increased health service usage and even led to increase in mortality in the intervention group compared with usual care group [1,6].

Perhaps the negative result of current study is due to the oftentimes variable definitions of and management algorithms for pulmonary exacerbations rather than the home monitoring system itself. Limited evidence exists for optimal threshold identification [7]. Aggregated, large amounts of data gathered by telemonitoring have not been proven to be used effectively. Moreover, as mentioned, a clear definition and management guidelines for pulmonary exacerbation are lacking. As a next step, studies are ongoing to evaluate how to use the collected data without increasing harm or cost. This could utilize machine learning or developing a more specific model defining and predicting pulmonary exacerbations as well as standardized indications for antibiotic therapy and hospitalization.

 

 

Applications for Clinical Practice

CF patients suffer from frequent pulmonary exacerbations and close monitoring and appropriate treatment is necessary to prevent progressive decline of lung function. This study has shown no benefit of electronic home monitoring in CF patients based on symptoms and spirometry over usual care. However, this negative outcome may be due to the limitation of the current definition of pulmonary exacerbation and lack of a consensus management algorithm. Optimizing the definition of pulmonary exacerbation and protocoling management based on severity may improve future evaluations of electronic home monitoring. Electronic home monitoring may help identify patients requiring evaluation; however, clinicians should continue to manage CF patients with conventional tools including regular follow-up visits, thorough history taking, and appropriate use of antibiotics based on their clinical acumen.

—Minkyung Kwon, MD, Joel Roberson, MD, Drew Willey, MD, and Neal Patel, MD (Mayo Clinic Florida, Jacksonville, FL, except for Dr. Roberson, of Oakland University/ Beaumont Health, Royal Oak, MI)

References

1. Polisena J, Tran K, Cimon K, et al. Home telehealth for chronic obstructive pulmonary disease: a systematic review and meta-analysis. J Telemed Telecare 2010;16 :120–7.

2. Sanders DB, Bittner RC, Rosenfeld M, et al. Failure to recover to baseline pulmonary function after cystic fibrosis pulmonary exacerbation. Am J Respir Crit Care Med 2010;182:627–32.

3. van Horck M, Winkens B, Wesseling G, et al. Early detection of pulmonary exacerbations in children with Cystic Fibrosis by electronic home monitoring of symptoms and lung function. Sci Rep 2017;7:12350.

4. Johnson C, Butler SM, Konstan MW, et al. Factors influencing outcomes in cystic fibrosis: a center-based analysis. Chest 2003;123:20–7.

5. Ding H, Karunanithi M, Kanagasingam Y, et al. A pilot study of a mobile-phone-based home monitoring system to assist in remote interventions in cases of acute exacerbation of COPD. J Telemed Telecare 2014;20:128–34.

6. Kargiannakis M, Fitzsimmons DA, Bentley CL, Mountain GA. Does telehealth monitoring identify exacerbations of chronic obstructive pulmonary disease and reduce hospitalisations? an analysis of system data. JMIR Med Inform 2017;5:e8.

7. Finkelstein J, Jeong IC. Machine learning approaches to personalize early prediction of asthma exacerbations. Ann N Y Acad Sci 2017;1387:153–65.

Journal of Clinical Outcomes Management - 25(1)

Study Overview

Objective. To determine if an intervention directed toward early detection of pulmonary exacerbations using electronic home monitoring of spirometry and symptoms would result in slower decline in lung function.

Design. Multicenter, randomized, nonblinded 2-arm clinical trial.

Setting and participants. The study was conducted at 14 cystic fibrosis centers in the United States between 2011 and 2015. Cystic fibrosis patients (stable at baseline, FEV1 > 25% predicted) at least 14 years old (adolescents and adults) were included and randomized 1:1 to either an early intervention arm or a usual care arm.

Intervention. The intervention arm used home-based spirometers and patient-reported respiratory symptoms collected with the Cystic Fibrosis Respiratory Symptoms Diary (CFRSD), which was to be completed twice weekly and transmitted to the central AM2 system. The AM2 system alerted sites to contact patients for an acute pulmonary exacerbation evaluation when FEV1 values fell by more than 10% from baseline or when two or more of the eight CFRSD respiratory symptoms worsened from baseline. Patients in the usual care arm had quarterly CF clinic visits and/or acute visits as needed.
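The alert rule described above can be sketched as a simple predicate. This is an illustrative reconstruction from the protocol description only; the function, symptom names, and scoring scheme are hypothetical and not taken from the study's actual software.

```python
def should_trigger_alert(baseline_fev1_l, current_fev1_l,
                         baseline_symptoms, current_symptoms):
    """Sketch of the AM2-style alert rule described above.

    Alerts when FEV1 drops more than 10% from baseline, or when two or
    more of the eight CFRSD respiratory symptoms worsen from baseline.
    Symptom dicts map symptom names to severity scores (higher = worse);
    the names and scoring here are hypothetical.
    """
    fev1_drop = (baseline_fev1_l - current_fev1_l) / baseline_fev1_l
    if fev1_drop > 0.10:
        return True
    worsened = sum(
        1 for symptom, baseline_score in baseline_symptoms.items()
        if current_symptoms.get(symptom, baseline_score) > baseline_score
    )
    return worsened >= 2

# Example: a 12% FEV1 drop triggers an alert even with stable symptoms.
baseline = {"cough": 1, "sputum": 1, "chest_pain": 0, "dyspnea": 1}
print(should_trigger_alert(3.0, 2.64, baseline, baseline))  # True
```

Either criterion alone suffices to trigger an evaluation, which is one reason the rule errs on the side of sensitivity rather than specificity.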

Main outcome measures. The primary outcome variable was the 52-week change in FEV1 volume in liters. Secondary outcome variables were changes in CFQ-R (Cystic Fibrosis Questionnaire, revised), CFRSD, FEV1 % predicted, FVC in liters, FEF25-75%, time to first acute pulmonary exacerbation, time from first pulmonary exacerbation to subsequent pulmonary exacerbation, number of hospitalization days, number of hospitalizations, percent change in prevalence of Pseudomonas or Staphylococcus aureus and global assessment of protocol burden score.

Main results. A total of 267 patients were randomized. The results were analyzed using an intention-to-treat analysis. There was no significant difference between study arms in 52-week mean change in FEV1 slope (mean slope difference, 0.00 L; 95% confidence interval, –0.07 to 0.07; P = 0.99). Exacerbations were detected sooner and more frequently in the early intervention arm than in the usual care arm (time to first exacerbation hazard ratio, 1.45; 95% confidence interval, 1.09 to 1.93; P = 0.01). Adverse events were not significantly different between treatment arms.

Conclusion. An intervention of electronic home monitoring of patients with CF was able to detect more exacerbations than usual care, but this did not result in slower decline in lung function.

Commentary

Establishing the efficacy and safety of home monitoring is a popular research topic in the current era of information technology. Most data to date have come from chronic adult diseases such as heart failure, diabetes, and COPD [1]. While relatively rare, CF is a chronic lung disease that could potentially benefit from home monitoring. This is supported by previous evidence suggesting that up to a quarter of pulmonary exacerbations in CF patients result in worsened baseline lung function [2]. Close monitoring of symptoms and FEV1 through home monitoring was therefore hypothesized to improve management and long-term lung function in this population. Indeed, in children with CF, electronic home monitoring of symptoms and lung function has been shown to detect pulmonary exacerbations early [3]. Frequency of monitoring varies widely between centers, and some evidence suggests that aggressive monitoring of CF provides better clinical outcomes [4]. Current CF guidelines do not make specific recommendations regarding frequency of monitoring.

In this study, Lechtzin et al attempted to determine whether early detection and treatment of acute pulmonary exacerbations in CF patients through home monitoring would prevent progressive decline in lung function. This multicenter randomized trial was conducted at large CF centers in the US with a total cohort of 267 patients. Mean follow-up was 46.8 weeks per participant in the intervention arm and 50.9 weeks per participant in the usual care arm. Given the predefined follow-up length (52 weeks), the primary outcome of FEV1 in liters was deemed sensitive enough to detect a decline in lung function. However, the discrepancy between follow-up times, with the intervention group having a 4.1-week shorter mean follow-up than the usual care group, could have influenced the interpretation of the results. Additionally, a large percentage of these patients were clinically stable at enrollment, with an average FEV1 of 79.5% predicted. The stability of the enrolled participants raises questions about the efficacy of home monitoring in CF patients with moderate to severe lung disease. Most importantly, due to the nature of the intervention the study could not be blinded, which could have substantially increased anxiety and self-awareness among patients in the intervention arm when reporting their symptoms.

Currently, an established consensus definition of pulmonary exacerbation in CF is lacking. Previous studies have proposed several different criteria for acute pulmonary exacerbations. Most proposed definitions depend on symptom changes such as cough, sputum, chest pain, shortness of breath, fatigue, and weight loss, making the definition less specific and less objective.

The number of acute visits in the intervention arm was significantly higher than that in the usual care arm (153 vs 64). Despite the higher number of visits in the intervention group, a significant number of these visits did not lead to a diagnosis of acute pulmonary exacerbation. Reportedly, 108 acute visits met protocol-defined pulmonary exacerbation criteria and 29 did not in the intervention arm, compared with 44 and 12, respectively, in the usual care arm. Given that the groups had similar baseline demographics and were randomized appropriately, one would expect the number of acute visits severe enough to meet protocol-defined criteria for a pulmonary exacerbation to be similar in both groups. However, the absolute number of protocol-defined pulmonary exacerbations was far greater in the intervention group. One could therefore question the clinical significance of what was defined as an acute pulmonary exacerbation: the excess of protocol-defined pulmonary exacerbations in the intervention group may simply reflect increased surveillance. If these additional exacerbations were clinically meaningful, one would expect the failure to identify and treat them in the usual care group to have produced a larger decline in FEV1 at 52 weeks relative to the intervention group. Given that the results indicate no significant difference in change in FEV1 between study arms, the monitored parameters in the intervention group may have been overly sensitive.
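A quick arithmetic check on the surveillance interpretation, using only the visit counts reported above: if extra monitoring mainly generated more visits of the same case mix, the fraction of acute visits meeting protocol criteria should be similar in both arms.

```python
# Acute-visit counts reported in the study: visits that met vs did not
# meet protocol-defined pulmonary exacerbation criteria, by arm.
intervention_met, intervention_not_met = 108, 29
usual_met, usual_not_met = 44, 12

intervention_frac = intervention_met / (intervention_met + intervention_not_met)
usual_frac = usual_met / (usual_met + usual_not_met)

print(f"intervention: {intervention_frac:.1%}")  # 78.8%
print(f"usual care:   {usual_frac:.1%}")         # 78.6%
```

The nearly identical proportions are consistent with increased surveillance, rather than a different clinical case mix, driving the larger absolute number of protocol-defined exacerbations in the intervention arm.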

Of note, the usual care arm did have statistically significantly higher rates of hospitalization and IV antibiotic use, suggesting that early identification at acute visits can catch patients earlier in the course of an acute pulmonary exacerbation and prevent the need for a higher level of care, though at the expense of more acute-event “false positives,” or overdiagnosis. This trade-off may not result in cost savings, though cost was not a consideration of this study. Additionally, there were likely differences in treatment between arms, as treatment was not standardized, with potential implications for the validity of the results.

The early intervention protocol not only led to increased visits with no benefit in lung function decline but, as one might expect, also proved remarkably burdensome to many patients compared with the usual care protocol. Entering data on a weekly (or perhaps even monthly) basis has been found burdensome in many remote-monitoring trials [5]. This may be especially apparent in a younger age group: in this study, the average age of the study population was between 18 and 30 years. It can be hypothesized that this age group may not have the time, motivation, or sense of responsibility to participate fully in home monitoring. Home monitoring may be more effective in conditions where the average patient age is older, or in a pediatric population in which parents oversee the patient's care, have more time, and derive subjective benefit from home monitoring services.

Less may be sufficient. The current study suggests that home monitoring in CF may increase medical expense and unnecessary antibiotic use with no improvement in lung function. It is difficult to assess from this study the impact that the burden of home monitoring would have on clinical outcomes; however, previous analyses of COPD populations using home monitoring systems, interestingly, also found increased health service use and even increased mortality in the intervention group compared with the usual care group [1,6].

Perhaps the negative result of the current study is due to the often variable definitions of and management algorithms for pulmonary exacerbations rather than to the home monitoring system itself. Limited evidence exists for identifying optimal alert thresholds [7]. The large amounts of aggregated data gathered by telemonitoring have not been shown to be used effectively, and, as noted, a clear definition of and management guidelines for pulmonary exacerbation are lacking. As a next step, studies are ongoing to evaluate how to use the collected data without increasing harm or cost. This could involve machine learning or the development of a more specific model for defining and predicting pulmonary exacerbations, along with standardized indications for antibiotic therapy and hospitalization.
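A toy illustration of the kind of model such work might produce (all data, features, and labels below are synthetic and hypothetical, not from the study): a minimal logistic classifier, written with the standard library only, that combines a fractional FEV1 drop and a worsened-symptom count into a single exacerbation-risk score.

```python
import math
import random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp for numerical stability
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.5, epochs=300):
    """Tiny stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical features: [fractional FEV1 drop, worsened symptoms / 8].
# Synthetic labels mimic an alert-style rule for illustration only.
random.seed(0)
rows, labels = [], []
for _ in range(200):
    drop = random.uniform(0.0, 0.25)
    symptoms = random.randint(0, 8)
    rows.append([drop, symptoms / 8.0])
    labels.append(1 if drop > 0.10 or symptoms >= 2 else 0)

w, b = train_logistic(rows, labels)
print(risk(w, b, [0.15, 6 / 8]) > risk(w, b, [0.02, 0.0]))  # True
```

Unlike the binary either/or trigger, a fitted score like this ranks episodes by risk, which in principle would let a future protocol tune its alert threshold against an explicit false-positive budget.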


Applications for Clinical Practice

CF patients suffer frequent pulmonary exacerbations, and close monitoring and appropriate treatment are necessary to prevent progressive decline of lung function. This study showed no benefit of electronic home monitoring based on symptoms and spirometry over usual care in CF patients. However, this negative outcome may reflect the limitations of the current definition of pulmonary exacerbation and the lack of a consensus management algorithm. Optimizing the definition of pulmonary exacerbation and protocolizing management based on severity may improve future evaluations of electronic home monitoring. Electronic home monitoring may help identify patients requiring evaluation; however, clinicians should continue to manage CF patients with conventional tools, including regular follow-up visits, thorough history taking, and appropriate use of antibiotics based on their clinical acumen.

—Minkyung Kwon, MD, Joel Roberson, MD, Drew Willey, MD, and Neal Patel, MD (Mayo Clinic Florida, Jacksonville, FL, except for Dr. Roberson, of Oakland University/ Beaumont Health, Royal Oak, MI)


References

1. Polisena J, Tran K, Cimon K, et al. Home telehealth for chronic obstructive pulmonary disease: a systematic review and meta-analysis. J Telemed Telecare 2010;16:120–7.

2. Sanders DB, Bittner RC, Rosenfeld M, et al. Failure to recover to baseline pulmonary function after cystic fibrosis pulmonary exacerbation. Am J Respir Crit Care Med 2010;182:627–32.

3. van Horck M, Winkens B, Wesseling G, et al. Early detection of pulmonary exacerbations in children with Cystic Fibrosis by electronic home monitoring of symptoms and lung function. Sci Rep 2017;7:12350.

4. Johnson C, Butler SM, Konstan MW, et al. Factors influencing outcomes in cystic fibrosis: a center-based analysis. Chest 2003;123:20–7.

5. Ding H, Karunanithi M, Kanagasingam Y, et al. A pilot study of a mobile-phone-based home monitoring system to assist in remote interventions in cases of acute exacerbation of COPD. J Telemed Telecare 2014;20:128–34.

6. Kargiannakis M, Fitzsimmons DA, Bentley CL, Mountain GA. Does telehealth monitoring identify exacerbations of chronic obstructive pulmonary disease and reduce hospitalisations? An analysis of system data. JMIR Med Inform 2017;5:e8.

7. Finkelstein J, Jeong IC. Machine learning approaches to personalize early prediction of asthma exacerbations. Ann N Y Acad Sci 2017;1387:153–65.



Addition of Durvalumab After Chemoradiotherapy Improves Progression-Free Survival in Unresectable Stage III Non-Small-Cell Lung Cancer

Article Type
Changed
Wed, 04/29/2020 - 11:32

Study Overview

Objective. To evaluate the efficacy of the anti-PD-L1 antibody durvalumab in the treatment of patients with unresectable stage III non-small-cell lung cancer (NSCLC) following completion of standard chemoradiotherapy.

Design. Interim analysis of the phase III PACIFIC study, a randomized, double-blind, international study.

Setting and participants. A total of 709 patients underwent randomization between May 2014 and April 2016. Eligible patients had histologically proven stage III, locally advanced and unresectable NSCLC with no evidence of disease progression following chemoradiotherapy. Enrolled patients had received at least 2 cycles of platinum-based chemotherapy concurrently with definitive radiation therapy (54 Gy to 66 Gy). Initially, patients were randomized within 2 weeks of completing radiation; however, the protocol was amended to allow randomization up to 42 days following completion of therapy. Patients were not eligible if they had previous exposure to anti-PD-1 or anti-PD-L1 antibodies, or active or prior autoimmune disease within the previous 2 years. All patients were required to have a WHO performance status of 0 or 1. Patients were stratified at randomization by age (< 65 or ≥ 65 years), sex, and smoking status. Enrollment was not restricted by level of PD-L1 expression.

Intervention. Patients were randomized in a 2:1 ratio to receive consolidation durvalumab 10 mg/kg or placebo every 2 weeks for up to 12 months. The intervention was discontinued in the event of confirmed disease progression, treatment with an alternative anticancer therapy, toxicity, or patient preference. The response to treatment was assessed every 8 weeks for the first year and every 12 weeks thereafter.

Main outcome measures. The primary endpoints of the study were progression-free survival (PFS) by blinded independent review and overall survival (OS). Secondary endpoints were the percentage of patients alive without disease progression at 12 and 18 months, objective response rate, duration of response, safety, and time to death or metastasis. Patients were given the option to provide archived tumor specimens for PD-L1 testing.

Results. The baseline characteristics were balanced. The median age at enrollment was 64 years and 91% of the patients were current or former smokers. The vast majority of patients (> 99% in both groups) received concurrent chemoradiotherapy. The response to initial concurrent therapy was similar in both groups, with complete response rates of 1.9% and 3% in the durvalumab and placebo groups, respectively, and partial response rates of 48.7% and 46.8%. Archived tumor samples showed ≥ 25% PD-L1 expression in 22.3% of patients (24% in the durvalumab group versus 18.6% in the placebo group) and < 25% in 41% of patients (39.3% in the durvalumab group versus 44.3% in the placebo group). PD-L1 status was unknown in 36.7% of the enrolled patients. Of note, 6% of patients enrolled had EGFR mutations.

After a median follow-up of 14.5 months, the median PFS was 16.8 months with durvalumab versus 5.6 months with placebo (P < 0.001; hazard ratio [HR] 0.52, 95% confidence interval [CI] 0.42–0.65). The 12-month PFS rate was 55.9% and 35.3% in the durvalumab and placebo group, respectively. The 18-month PFS rate was 44.2% and 27% in the durvalumab and placebo group, respectively. The PFS results were consistent across all subgroups, and the PFS benefit was observed regardless of PD-L1 expression. The median time to death or metastasis was 23.2 months in the durvalumab group versus 14.6 months with placebo (HR 0.52; 95% CI 0.39–0.69). The objective response rate was significantly higher in the durvalumab group (28.4% vs. 16%, P < 0.001). The median duration of response was longer with durvalumab: of the patients who responded to durvalumab, 73% had an ongoing response at 18 months compared with 47% in the placebo group. OS was not assessed at this interim analysis.
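A hazard ratio below 1 maps directly onto the relative risk reduction language used elsewhere in this review; a minimal arithmetic sketch using the values reported above:

```python
# Convert a hazard ratio (and its confidence bounds) into the percent
# reduction in the instantaneous risk of progression or death.
def risk_reduction_pct(hazard_ratio: float) -> float:
    return (1.0 - hazard_ratio) * 100.0

# Durvalumab vs. placebo, from the reported PFS result.
hr, ci_low, ci_high = 0.52, 0.42, 0.65

print(f"risk reduction: {risk_reduction_pct(hr):.0f}%")  # 48%
print(f"CI for the reduction: {risk_reduction_pct(ci_high):.0f}%"
      f" to {risk_reduction_pct(ci_low):.0f}%")          # 35% to 58%
```

Note that this conversion applies to the hazard (instantaneous risk), not to absolute PFS probabilities at a fixed time point; those are reported separately as the 12- and 18-month PFS rates.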

Adverse events (AEs) of any grade occurred in approximately 95% of patients in both groups. Grade 3 or 4 AEs occurred in 29.9% of the durvalumab group and 26.1% of the placebo group. The most common grade 3 or 4 AE was pneumonia, occurring in about 4% of patients in each group. More patients in the durvalumab group discontinued treatment (15.4% vs. 9.8%). Death due to an AE occurred in 4.4% of the durvalumab group and 5.6% of the placebo group. The most frequent AEs leading to discontinuation were pneumonitis or radiation pneumonitis and pneumonia. Pneumonitis or radiation pneumonitis occurred in 33.9% (3.4% grade 3 or 4) and 24.8% (2.6% grade 3 or 4) of the durvalumab and placebo groups, respectively. Immune-mediated AEs of any grade were more common in the durvalumab group, occurring in 24% of patients (vs. 8% with placebo); of these, 14% of patients in the durvalumab group required glucocorticoids compared with 4.3% in the placebo group. The most common AE of interest was diarrhea, which occurred in 18% of the patients in both groups.

Conclusion. The addition of consolidative durvalumab following completion of concurrent chemoradiotherapy in patients with stage III, locally advanced NSCLC significantly improved PFS without a significant increase in treatment-related adverse events.

Commentary

Pre-clinical evidence has suggested that chemotherapy and radiation therapy may lead to upregulation of PD-L1 expression by tumor cells, leading to increased PD-L1-mediated T-cell apoptosis [1,2]. Given prior studies documenting PD-L1 expression as a predictive biomarker for response to durvalumab, the authors of the current trial hypothesized that the addition of durvalumab after chemoradiotherapy would provide clinical benefit, likely mediated by upregulation of PD-L1. The results from this pre-planned interim analysis show a significant improvement in progression-free survival with the addition of durvalumab, corresponding to a 48% decrease in the risk of progression or death. This benefit was noted across all patient subgroups. In addition, responses to durvalumab were durable, with 73% of the patients who responded having an ongoing response at 18 months. Interestingly, the response to durvalumab was independent of PD-L1 expression, which is in contrast to previous studies showing PD-L1 expression to be a good biomarker for durvalumab response [3].

The results of the PACIFIC trial represent a clinically meaningful benefit and suggest an excellent option for patients with unresectable stage III NSCLC. One important point to highlight is that the addition of durvalumab was well tolerated and did not appear to significantly increase the rate of severe adverse events. Of particular interest are the similar rates of grade 3 or 4 pneumonitis, which appeared to be around 3% for each group. Overall survival data remain immature at the time of this analysis; however, given the acceptable toxicity profile and improved PFS, this combination should be considered for these patients in clinical practice. Ongoing trials are underway to evaluate the role of single-agent durvalumab in the front-line setting for NSCLC.

 

Applications for Clinical Practice

In patients with unresectable stage III NSCLC who have no evidence of disease progression following completion of chemoradiotherapy, the addition of durvalumab provided a significant and clinically meaningful improvement in progression-free survival without an increase in serious adverse events. While the overall survival data are immature, the 48% reduction in the risk of progression or death supports the incorporation of durvalumab into standard practice in this patient population.

—Daniel Isaac, DO, MS

References

1. Deng L, Liang H, Burnette B, et al. Irradiation and anti-PD-L1 treatment synergistically promote antitumor immunity in mice. J Clin Invest 2014;124:687–95.

2. Zhang P, Su DM, Liang M, Fu J. Chemopreventive agents induce programmed death-1-ligand 1 (PD-L1) surface expression in breast cancer cells and promote PD-L1 mediated T cell apoptosis. Mol Immunol 2008;45:1470–6.

3. Antonia SJ, Brahmer JR, Khleif S, et al. Phase 1/2 study of the safety and clinical activity of durvalumab in patients with non-small cell lung cancer (NSCLC). Presented at the 41st European Society for Medical Oncology Annual Meeting, Copenhagen, October 7–11, 2016.

Issue
Journal of Clinical Outcomes Management - 25(1)



Mepolizumab for Eosinophilic Chronic Obstructive Pulmonary Disease

Article Type
Changed
Wed, 04/29/2020 - 11:30

Study Overview

Objective. To determine the effect of mepolizumab on the annual rate of chronic obstructive pulmonary disease (COPD) exacerbations in high-risk patients.

Design. Two randomized double-blind placebo-controlled parallel trials (METREO and METREX).

Setting and participants. Participants were recruited at more than 100 investigative sites in more than 15 countries. Inclusion criteria were age 40 years or older with a diagnosis of COPD for at least 1 year, plus: airflow limitation (FEV1/FVC < 0.7) with a post-bronchodilator FEV1 > 20% and ≤ 80% of the predicted value; current COPD therapy for at least 3 months prior to enrollment (a high-dose inhaled corticosteroid [ICS] plus at least 2 other classes of medications, i.e., “triple therapy”); and a high risk of exacerbations (at least 1 severe exacerbation [requiring hospitalization] or 2 moderate exacerbations [requiring treatment with systemic corticosteroids and/or antibiotics] in the past year).

Notable exclusion criteria were patients with diagnoses of asthma in never-smokers, alpha-1 antitrypsin deficiency, recent exacerbations (in past month), lung volume reduction surgery (in past year), eosinophilic or parasitic diseases, or those with recent monoclonal antibody treatment. Patients with the asthma-COPD overlap syndrome were included only if they had a history of smoking and met the COPD inclusion criteria listed above.

Intervention. The treatment period lasted a total of 52 weeks, with an additional 8 weeks of follow-up. In the METREX study, patients were randomized 1:1 to placebo or low-dose mepolizumab (100 mg) using permuted-block randomization regardless of eosinophil count, but were stratified at screening for a modified intention-to-treat analysis into low (< 150 cells/uL) or high (≥ 150 cells/uL) eosinophil count. In the METREO study, patients were randomized 1:1:1 to placebo, low-dose (100 mg), or high-dose (300 mg) mepolizumab only if blood eosinophilia was present (≥ 150 cells/uL at screening or ≥ 300 cells/uL in the past 12 months). Investigators and patients were blinded to assignment of drug or placebo. Sample size calculations indicated that, to provide 90% power to detect a 30% decrease in the rate of exacerbations in METREX and a 35% decrease in METREO, a total of 800 and 660 patients would need to be enrolled in METREX and METREO, respectively. Both studies met their enrollment quota.
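Permuted-block randomization keeps the arms balanced throughout accrual by shuffling treatment labels within fixed-size blocks. A minimal sketch of the 1:1 scheme used in METREX (the block size of 4, the seed, and the arm labels are illustrative assumptions, not taken from the protocol):

```python
import random

def permuted_block_assignments(n_patients: int, block_size: int = 4, seed: int = 0):
    """Generate 1:1 treatment assignments in shuffled blocks so the
    two arms stay exactly balanced after every completed block."""
    assert block_size % 2 == 0, "1:1 allocation needs an even block size"
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        # Each block holds equal numbers of each arm, in random order.
        block = (["mepolizumab"] * (block_size // 2)
                 + ["placebo"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

# METREX randomized 836 patients; 836 is divisible by 4, so the split is exact.
arms = permuted_block_assignments(836)
print(arms.count("mepolizumab"), arms.count("placebo"))  # 418 418
```

In practice trial randomization is also stratified (here, by screening eosinophil count), which amounts to running one such block sequence per stratum.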

Main outcome measures. The primary outcome was the annual rate of exacerbations that were either moderate (requiring systemic corticosteroids and/or antibiotics) or severe (requiring hospitalization). Secondary outcomes included the time to first moderate/severe exacerbation, change from baseline in the COPD Assessment Test (CAT) and St. George’s Respiratory Questionnaire (SGRQ), and change from baseline in blood eosinophil count, FEV1, and FVC. Safety and adverse event endpoints were also assessed.

A modified intention-to-treat analysis was performed overall and, in the METREX study, stratified on eosinophil count at screening; all patients who underwent randomization and received at least one dose of medication or placebo were included in their respective group. Multiple comparisons were accounted for using the Benjamini-Hochberg procedure, exacerbations were assumed to follow a negative binomial distribution, and Cox proportional-hazards regression was used to model the relationship between covariates of interest and time-to-event endpoints.
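The Benjamini-Hochberg procedure mentioned above controls the false discovery rate across multiple endpoints via a step-up rule on the sorted p-values. A self-contained sketch (the p-values below are illustrative, not taken from the trial):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return the (sorted) indices of hypotheses rejected by the
    Benjamini-Hochberg step-up procedure at FDR level alpha."""
    m = len(p_values)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # ... and reject all hypotheses with rank <= k_max.
    return sorted(order[:k_max])

# Illustrative p-values for five endpoints (assumed values):
pvals = [0.001, 0.012, 0.04, 0.20, 0.74]
print(benjamini_hochberg(pvals))  # [0, 1]
```

Note that the third p-value (0.04) would pass an unadjusted 0.05 threshold but is not rejected here, since 0.04 > (3/5) × 0.05 = 0.03; this is exactly the kind of multiplicity penalty the trial's analysis applied to its secondary endpoints.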

Main results. In the METREX study, 1161 patients were enrolled and 836 underwent randomization and received at least 1 dose of medication or placebo. In METREO, 1071 patients were enrolled and 674 underwent randomization and received at least one dose of medication or placebo. In both studies the patients in the medication and placebo groups were well balanced at baseline across demographics (age, gender, smoking history, duration of COPD) and pulmonary function (FEV1, FVC, FEV1/FVC, CAT, SGRQ). In METREX, a total of 462 (55%) patients had an eosinophilic phenotype and 374 (45%) did not.

There was no difference between groups in the primary endpoint of annual exacerbation rate in METREO (1.49/yr in placebo vs. 1.19/yr in low-dose and 1.27/yr in high-dose mepolizumab, rate ratio of high-dose to placebo 0.86, 95% confidence interval [CI] 0.7–1.05, P = 0.14). There was no difference in the primary outcome in the overall intention-to-treat analysis in the METREX study (1.49/yr in mepolizumab vs. 1.52/yr in placebo, P > 0.99). Only when analyzing the high eosinophilic phenotype in the stratified intention-to-treat METREX group was there a significant difference in the primary outcome (1.41/yr in mepolizumab vs. 1.71/yr in placebo, P = 0.04, rate ratio 0.82, 95% CI 0.68–0.98).
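The rate ratios quoted above can be sanity-checked by recomputing them from the reported annual exacerbation rates (the confidence intervals come from the negative binomial model, not from this simple division):

```python
def rate_ratio(treatment_rate: float, placebo_rate: float) -> float:
    """Ratio of annualized exacerbation rates; values < 1 favor treatment."""
    return treatment_rate / placebo_rate

# High-eosinophil stratum of METREX, from the results above:
rr = rate_ratio(1.41, 1.71)
print(f"rate ratio: {rr:.2f}")  # 0.82
print(f"relative reduction in exacerbations: {(1 - rr) * 100:.0f}%")
```

The same calculation on the METREO high-dose comparison (1.27 vs. 1.49 per year) gives roughly 0.85, close to the reported model-based estimate of 0.86; small differences reflect covariate adjustment in the fitted model.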

There were no significant differences in any secondary endpoint in the METREO study. In the METREX study, mepolizumab treatment resulted in a significantly longer time to first exacerbation (192 days vs. 141 days; hazard ratio 0.75, 95% CI 0.60–0.94, P = 0.04) but no difference in the change in SGRQ (–2.8 vs. –3.0, P > 0.99) or CAT score (–0.8 vs. 0, P > 0.99). There was no significant difference in any measure of pulmonary function (FEV1, FVC, FEV1/FVC) between the treatment and placebo groups. As expected, there was a significant decrease in peripheral blood eosinophil count in the medication arm of both studies. The incidence of adverse events and safety endpoints was similar between the trial groups in METREX and METREO.

Conclusions. In this pair of placebo-controlled double-blind randomized parallel studies, there was a significant decline in annual exacerbation rate in patients with an eosinophilic phenotype treated with mepolizumab in a stratified intention-to-treat analysis of one of two parallel studies (METREX). However, there was no significant difference in the primary outcome of the other parallel study (METREO), which included only those patients with an eosinophilic phenotype. Additionally, there was no significant difference in any secondary endpoints in either study. The medication was generally safe and well tolerated.

Commentary

Mepolizumab is a humanized monoclonal antibody that targets and blocks interleukin-5, a key mediator of eosinophilic activity. Due to its ability to decrease eosinophil number and function, it is currently approved as a therapy for severe asthma with an eosinophilic phenotype [1]. While asthma and COPD have historically been thought of as separate entities with distinct pathophysiologic mechanisms, recent evidence has suggested that a subset of COPD patients experiences significant eosinophilic inflammation. This group may behave more like asthmatic patients and may have a different response to medications such as inhaled corticosteroids, but the role of eosinophils in guiding prognostication and treatment in this group is still unclear [2,3].

In this study, Pavord and colleagues investigated the use of the anti-IL5 drug mepolizumab in COPD patients at risk of exacerbations who demonstrated an eosinophilic phenotype. The physiologic rationale for the study was that eosinophilic inflammation is thought to be a driver of exacerbations in COPD patients with an eosinophilic phenotype, and therefore a decrease in eosinophilic number and function should result in a decrease in exacerbations. The authors conducted a well-designed placebo-controlled double-blind study with a clearly defined endpoint, met their enrollment goals as determined by their power calculations, and used COPD patients at high risk of exacerbations to enrich their study.

There was no difference in the primary outcome in the METREO arm of the study, which included only patients with baseline eosinophilia (≥ 150 cells/uL), or in the overall intention-to-treat analysis in METREX (which did not screen patients on baseline eosinophil count). Only when stratified on baseline eosinophil count in the METREX study was a significant treatment effect found, where patients with a high eosinophil count at baseline (≥ 150 cells/uL) had a decreased risk of exacerbations when treated with mepolizumab. Notably, there was no difference in any secondary outcome in METREO or in METREX aside from a longer time to first exacerbation in METREX in the mepolizumab group. The authors use these data to conclude that mepolizumab treatment results in a lower rate of exacerbations and a longer time to the first exacerbation in COPD patients with an eosinophilic phenotype, and that the extent of the treatment effect is related to blood eosinophil counts.

The authors conducted a well-designed and rigorous study, and used robust and appropriate statistical analysis; however, significant questions remain regarding their conclusions. The primary concern is the role of mepolizumab in the treatment of COPD patients to decrease exacerbations may be overstated. When including only those with baseline eosinophilia in the METREO arm, there was no significant difference between placebo and low or high dose of mepolizumab; however, there was an appropriate and expected decrease in blood eosinophils, indicating the medication worked as intended. In the overall intention-to-treat analysis in the METREX arm, there was no difference between mepolizumab and placebo, and only in the analysis of METREX stratified to eosinophil count was there a significant difference (with an upper confidence interval rate ratio [0.98] approaching unity).

Additionally there was no significant difference between the 2 groups across a number of clinically important secondary endpoints, including pulmonary function measurements and symptomatic scores. Only the time to exacerbation was significantly longer in the mepolizumab group in METREX.

Taken together, this calls into question the conclusion that a decrease in eosinophil counts due to mepolizumab has resulted in a lower rate of exacerbations, particularly as a higher dose of mepolizumab did not result in a stronger effect. The lack of difference between groups in secondary endpoints is also concerning, as those would be expected to improve with a decrease in exacerbations [4,5]. As the authors point out, their evidence suggests that eosinophils may be an important biomarker in COPD and may aid in the therapeutic decision-making process. However, given the inconsistencies in the data as noted above, it would be difficult to rely on the evidence from this study alone to support their conclusion regarding the clinical utility of mepolizumab in COPD.

The authors discuss a number of limitations that may account for the lack of consistent effect seen in this study. Aside from the standard limitations applicable to any clinical trial, they note the potential confounding effect of previous oral glucocorticoid therapy in reducing eosinophil counts. This may have masked the eosinophilic phenotype in some study patients, leading to the attenuated effect of mepolizumab seen in this study.

The authors also note that information that might be potentially valuable for identifying treatment responders, such as a history of allergies and atopy, were not available. Inclusion of those patients may be helpful in enriching the trial with potential treatment-responders, and future studies may benefit from focusing on COPD patients with a more atopic phenotype who more closely resemble those with the asthma-COPD overlap syndrome.

A final limitation to discuss is the focus on blood eosinophilic counts. Due to the difficulty of measuring sputum eosinophils, and the reasonable degree of correlation between blood and sputum in asthmatic patients, blood eosinophils have largely supplanted sputum eosinophils as markers of TH2 CD4 T-cell activity in the pulmonary system [6]. This substitution is also used in the COPD population, however, due to the differences in pathophysiology it is unclear if eosinophils in asthmatic patients behave similarly to those in COPD patients [7]. Additionally, the cutoff of 150 cells/uL has been obtained primarily from sub-group analysis of previous studies on COPD patients, but it is unclear if this cutoff truly reflects elevated sputum eosinophilia. While there is likely some degree of correlation between blood and sputum eosinophilia in COPD patients, a lack of significant effect seen in this study may be due to an incorrect cutoff for elevated eosinophilia and a reliance on blood eosinophils over sputum counts. Further studies utilizing sputum eosinophils may be of value in addressing this limitation.

 

 

Applications for Clinical Practice

In this study, Pavord and colleagues found a potential benefit of mepolizumab treatment for reducing exacerbations in COPD patients with an eosinophilic phenotype. The conflicting results regarding the underlying physiology and the weak treatment effect suggest this medication may not be ready for use in clinical practice without additional supporting evidence. From a practical standpoint, the high cost of medication (~$2500 per month) and marginal benefit of treatment imply that treatment with mepolizumab in COPD patients may not be cost-effective, and even treatment in individual patients on a trial basis should be discouraged until additional supporting data becomes available. Of primary concern are the optimal selection of COPD patients that will achieve benefit with mepolizumab treatment, and the optimal dose of medication to achieve that benefit. The results presented here do not satisfactorily answer these questions, and additional studies are required.

—Arun Jose, MD, The George Washington University, Washington, DC

References

1. Pelaia C, Vatrella A, Busceti MT, et al. Severe eosinophilic asthma: from the pathogenic role of interleukin-5 to the therapeutic action of mepolizumab. Drug Des Devel Ther 2017;11:3137–44.

2. Kim VL, Coombs NA, Staples KJ, et al. Impact and associations of eosinophilic inflammation in COPD: analysis of the AERIS cohort. Eur Respir J 2017;50:pii:1700853.

3. Roche N, Chapman KR, Vogelmeier CF, et al. Blood eosinophils and response to maintenance chronic obstructive pulmonary disease treatment. Data from the FLAME trial. Am J Respir Crit Care Med 2017;195:1189–97.

4. Halpin DMG, Decramer M, Celli BR, et al. Effect of a single exacerbation on decline in lung function in COPD. Respir Med 2017;128:85–91.

5. Rassouli F, Baty F, Stolz D, et al. Longitudinal change of COPD assessment test (CAT in a telehealthcare cohort is associated with exacerbation risk. Int J COPD 2017;12:3103–9.

6. Gauthier M, Ray A, Wenzel SE. Evolving concepts of asthma. Am J Respir Crit Care Med 2015;192:660–8.

7. Negewo NA, McDonald VM, Baines KJ, et al. Peripheral blood eosinophils: a surrogate marker for airway eosinophilia in stable COPD. Int J COPD 2016;11:1495–504.

Journal of Clinical Outcomes Management - 25(1)

Study Overview

Objective. To determine the effect of mepolizumab on the annual rate of chronic obstructive pulmonary disease (COPD) exacerbations in high-risk patients.

Design. Two randomized double-blind placebo-controlled parallel trials (METREO and METREX).

Setting and participants. Participants were recruited at over 100 investigative sites in over 15 countries. Inclusion criteria were adults (40 years or older) with a diagnosis of COPD for at least 1 year and: airflow limitation (FEV1/FVC < 0.7); some bronchodilator reversibility (post-bronchodilator FEV1 > 20% and ≤ 80% of predicted value); current COPD therapy for at least 3 months prior to enrollment (a high-dose inhaled corticosteroid [ICS] plus at least 2 other classes of medications, i.e., “triple therapy”); and a high risk of exacerbations (at least 1 severe [requiring hospitalization] or 2 moderate [treated with systemic corticosteroids and/or antibiotics] exacerbations in the past year).
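As an illustration, the spirometric thresholds above can be expressed as a simple eligibility check; the function name and argument layout are hypothetical, not taken from the trial protocol:

```python
def meets_spirometry_criteria(fev1_l, fvc_l, fev1_pct_predicted):
    """Illustrative check of the trial's spirometric inclusion thresholds.

    fev1_l, fvc_l: post-bronchodilator volumes in liters.
    fev1_pct_predicted: post-bronchodilator FEV1 as a % of the predicted value.
    """
    airflow_limitation = (fev1_l / fvc_l) < 0.7           # FEV1/FVC < 0.7
    reversibility_window = 20 < fev1_pct_predicted <= 80  # > 20% and <= 80% predicted
    return airflow_limitation and reversibility_window
```

For example, a patient with FEV1 of 1.4 L, FVC of 2.4 L (ratio ≈ 0.58), and an FEV1 of 55% of predicted would meet both thresholds.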

Notable exclusion criteria included a diagnosis of asthma in a never-smoker, alpha-1 antitrypsin deficiency, a recent exacerbation (within the past month), lung volume reduction surgery (within the past year), eosinophilic or parasitic disease, and recent monoclonal antibody treatment. Patients with the asthma-COPD overlap syndrome were included only if they had a history of smoking and met the COPD inclusion criteria listed above.

Intervention. The treatment period lasted for a total of 52 weeks, with an additional 8 weeks of follow-up. In the METREX study, patients were randomized 1:1 to placebo or low-dose medication (100 mg) using permuted-block randomization regardless of eosinophil count (but were stratified at screening, for a modified intention-to-treat analysis, into either a low eosinophil count [< 150 cells/uL] or a high count [≥ 150 cells/uL]). In the METREO study, patients were randomized 1:1:1 to placebo, low-dose (100 mg), or high-dose (300 mg) medication only if blood eosinophilia was present (≥ 150 cells/uL at screening or ≥ 300 cells/uL in the past 12 months). Investigators and patients were blinded to treatment assignment. Sample size calculations indicated that to provide 90% power to detect a 30% decrease in the rate of exacerbations in METREX and a 35% decrease in METREO, a total of 800 and 660 patients would need to be enrolled, respectively. Both studies met their enrollment quota.
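Permuted-block randomization, as used in METREX, fills fixed-size blocks with a balanced mix of arms and shuffles each block, so group sizes stay nearly equal throughout enrollment. A minimal sketch, assuming a block size of 4 (the trial's actual block sizes are not reported here):

```python
import random

def permuted_block_assignments(n_patients, arms=("mepolizumab", "placebo"),
                               block_size=4, seed=0):
    """Generate 1:1 treatment assignments in shuffled, balanced blocks."""
    assert block_size % len(arms) == 0, "block must divide evenly among arms"
    rng = random.Random(seed)  # fixed seed for reproducibility in this sketch
    assignments = []
    while len(assignments) < n_patients:
        block = list(arms) * (block_size // len(arms))  # balanced block
        rng.shuffle(block)                              # random order within block
        assignments.extend(block)
    return assignments[:n_patients]
```

Each completed block contains exactly two patients per arm, which is what keeps the running allocation balanced.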

Main outcome measures. The primary outcome was the annual rate of exacerbations that were either moderate (requiring systemic corticosteroids and/or antibiotics) or severe (requiring hospitalization). Secondary outcomes included the time to first moderate/severe exacerbation, change from baseline in the COPD Assessment Test (CAT) and St. George’s Respiratory Questionnaire (SGRQ), and change from baseline in blood eosinophil count, FEV1, and FVC. Safety and adverse-event endpoints were also assessed.

A modified intention-to-treat analysis was performed overall and, in the METREX study, stratified on eosinophil count at screening; all patients who underwent randomization and received at least one dose of medication or placebo were included in their respective group. Multiple comparisons were accounted for using the Benjamini-Hochberg procedure, exacerbations were assumed to follow a negative binomial distribution, and a Cox proportional-hazards model was used to relate covariates of interest to the primary outcome.
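The Benjamini-Hochberg adjustment referenced above is a step-up procedure over the ranked p-values; a generic sketch (not the trial's actual analysis code) follows:

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (false discovery rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone adjusted values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

A hypothesis is then rejected at false discovery rate level q when its adjusted p-value is at most q.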

Main results. In the METREX study, 1161 patients were enrolled and 836 underwent randomization and received at least 1 dose of medication or placebo. In METREO, 1071 patients were enrolled and 674 underwent randomization and received at least one dose of medication or placebo. In both studies the patients in the medication and placebo groups were well balanced at baseline across demographics (age, gender, smoking history, duration of COPD) and pulmonary function (FEV1, FVC, FEV1/FVC, CAT, SGRQ). In METREX, a total of 462 (55%) patients had an eosinophilic phenotype and 374 (45%) did not.

There was no difference between groups in the primary endpoint of annual exacerbation rate in METREO (1.49/yr in placebo vs. 1.19/yr in low-dose and 1.27/yr in high-dose mepolizumab, rate ratio of high-dose to placebo 0.86, 95% confidence interval [CI] 0.70–1.05, P = 0.14). There was no difference in the primary outcome in the overall intention-to-treat analysis in the METREX study (1.49/yr in mepolizumab vs. 1.52/yr in placebo, P > 0.99). Only when analyzing the high eosinophilic phenotype in the stratified intention-to-treat METREX group was there a significant difference in the primary outcome (1.41/yr in mepolizumab vs. 1.71/yr in placebo, P = 0.04, rate ratio 0.82, 95% CI 0.68–0.98).
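As a quick arithmetic check on the stratified METREX result, the rate ratio is essentially the quotient of the annual exacerbation rates (the published 0.82 comes from a negative binomial model, so the raw quotient agrees only approximately):

```python
# Annual exacerbation rates reported in the stratified METREX analysis.
rate_mepolizumab = 1.41  # exacerbations per year, mepolizumab group
rate_placebo = 1.71      # exacerbations per year, placebo group

rate_ratio = rate_mepolizumab / rate_placebo
print(round(rate_ratio, 2))  # 0.82, matching the reported model-based estimate
```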

There were no significant differences in any secondary endpoint in the METREO study. In the METREX study, mepolizumab treatment resulted in a significantly longer time to first exacerbation (192 days vs. 141 days, hazard ratio 0.75, 95% CI 0.60–0.94, P = 0.04) but no difference in the change in SGRQ (–2.8 vs. –3.0, P > 0.99) or CAT score (–0.8 vs. 0, P > 0.99). There was no significant difference in any measures of pulmonary function between the treatment and placebo groups (FEV1, FVC, FEV1/FVC). As expected, there was a significant decrease in peripheral blood eosinophil count in both studies in the medication arm. The incidence of adverse events and safety endpoints were similar between the trial groups in METREX and METREO.

Conclusions. In this pair of placebo-controlled double-blind randomized parallel studies, there was a significant decline in annual exacerbation rate in patients with an eosinophilic phenotype treated with mepolizumab in a stratified intention-to-treat analysis of one of two parallel studies (METREX). However, there was no significant difference in the primary outcome of the other parallel study (METREO), which included only those patients with an eosinophilic phenotype. Additionally, there was no significant difference in any secondary endpoints in either study. The medication was generally safe and well tolerated.

Commentary

Mepolizumab is a humanized monoclonal antibody that targets and blocks interleukin-5, a key mediator of eosinophilic activity. Due to its ability to decrease eosinophil number and function, it is currently approved as a therapy for severe asthma with an eosinophilic phenotype [1]. While asthma and COPD have historically been thought of as separate entities with distinct pathophysiologic mechanisms, recent evidence has suggested that a subset of COPD patients experience significant eosinophilic inflammation. This group may behave more like asthmatic patients, and may have a different response to medications such as inhaled corticosteroids, but the role of eosinophils in guiding prognostication and treatment in this group is still unclear [2,3].

In this study, Pavord and colleagues investigated the use of the anti-IL5 drug mepolizumab in COPD patients at risk of exacerbations who demonstrated an eosinophilic phenotype. The physiologic rationale for the study was that eosinophilic inflammation is thought to be a driver of exacerbations in COPD patients with an eosinophilic phenotype, and therefore a decrease in eosinophil number and function should result in a decrease in exacerbations. The authors conducted a well-designed placebo-controlled double-blind study with a clearly defined endpoint, met their enrollment goals as determined by their power calculations, and used COPD patients at high risk of exacerbations to enrich their study.

There was no difference in the primary outcome in the METREO arm of the study, which enrolled only patients with baseline eosinophilia (≥ 150 cells/uL), or in the overall intention-to-treat analysis in METREX (which did not screen patients on baseline eosinophil count). Only when the METREX analysis was stratified on baseline eosinophil count was a significant treatment effect found: patients with a high eosinophil count at baseline (≥ 150 cells/uL) had a decreased risk of exacerbations when treated with mepolizumab. Notably, there was no difference in any secondary outcome in METREO or in METREX aside from a longer time to first exacerbation in the METREX mepolizumab group. The authors use these data to conclude that mepolizumab treatment results in a lower rate of exacerbations and a longer time to first exacerbation in COPD patients with an eosinophilic phenotype, and that the extent of the treatment effect is related to blood eosinophil counts.

The authors conducted a well-designed and rigorous study, and used robust and appropriate statistical analysis; however, significant questions remain regarding their conclusions. The primary concern is that the role of mepolizumab in the treatment of COPD patients to decrease exacerbations may be overstated. When including only those with baseline eosinophilia in the METREO arm, there was no significant difference between placebo and low- or high-dose mepolizumab; however, there was an appropriate and expected decrease in blood eosinophils, indicating the medication worked as intended. In the overall intention-to-treat analysis in the METREX arm, there was no difference between mepolizumab and placebo, and only in the analysis of METREX stratified by eosinophil count was there a significant difference (with the upper bound of the rate ratio's 95% CI [0.98] approaching unity).

Additionally, there was no significant difference between the 2 groups across a number of clinically important secondary endpoints, including pulmonary function measurements and symptom scores. Only the time to first exacerbation was significantly longer in the mepolizumab group in METREX.

Taken together, these findings call into question the conclusion that a decrease in eosinophil counts due to mepolizumab resulted in a lower rate of exacerbations, particularly as a higher dose of mepolizumab did not result in a stronger effect. The lack of difference between groups in secondary endpoints is also concerning, as those would be expected to improve with a decrease in exacerbations [4,5]. As the authors point out, their evidence suggests that eosinophils may be an important biomarker in COPD and may aid in the therapeutic decision-making process. However, given the inconsistencies in the data as noted above, it would be difficult to rely on the evidence from this study alone to support their conclusion regarding the clinical utility of mepolizumab in COPD.

The authors discuss a number of limitations that may account for the lack of consistent effect seen in this study. Aside from the standard limitations applicable to any clinical trial, they note the potential confounding effect of previous oral glucocorticoid therapy in reducing eosinophil counts. This may have masked the eosinophilic phenotype in some study patients, leading to the attenuated effect of mepolizumab seen in this study.

The authors also note that information that might be valuable for identifying treatment responders, such as a history of allergies and atopy, was not available. Collecting such information could help enrich future trials with potential treatment responders, and future studies may benefit from focusing on COPD patients with a more atopic phenotype who more closely resemble those with the asthma-COPD overlap syndrome.

A final limitation to discuss is the focus on blood eosinophil counts. Due to the difficulty of measuring sputum eosinophils, and the reasonable degree of correlation between blood and sputum in asthmatic patients, blood eosinophils have largely supplanted sputum eosinophils as markers of TH2 CD4 T-cell activity in the pulmonary system [6]. This substitution is also used in the COPD population; however, due to the differences in pathophysiology it is unclear if eosinophils in asthmatic patients behave similarly to those in COPD patients [7]. Additionally, the cutoff of 150 cells/uL has been obtained primarily from subgroup analyses of previous studies of COPD patients, and it is unclear if this cutoff truly reflects elevated sputum eosinophilia. While there is likely some degree of correlation between blood and sputum eosinophilia in COPD patients, the lack of a consistent effect in this study may be due to an incorrect cutoff for elevated eosinophilia and a reliance on blood eosinophils over sputum counts. Further studies utilizing sputum eosinophils may be of value in addressing this limitation.

Applications for Clinical Practice

In this study, Pavord and colleagues found a potential benefit of mepolizumab treatment for reducing exacerbations in COPD patients with an eosinophilic phenotype. The conflicting results regarding the underlying physiology and the weak treatment effect suggest this medication may not be ready for use in clinical practice without additional supporting evidence. From a practical standpoint, the high cost of the medication (~$2500 per month) and the marginal benefit of treatment imply that treatment with mepolizumab in COPD patients may not be cost-effective, and even treatment of individual patients on a trial basis should be discouraged until additional supporting data become available. Of primary concern are the optimal selection of COPD patients who will benefit from mepolizumab treatment and the optimal dose of medication to achieve that benefit. The results presented here do not satisfactorily answer these questions, and additional studies are required.

—Arun Jose, MD, The George Washington University, Washington, DC

Study Overview

Objective. To determine the effect of mepolizumab on the annual rate of chronic obstructive pulmonary disease (COPD) exacerbations in high-risk patients.

Design. Two randomized double-blind placebo-controlled parallel trials (METREO and METREX).

Setting and participants. Participants were recruited from over 15 countries in over 100 investigative sites. Inclusion criteria were adults (40 years or older) with a diagnosis of COPD for at least 1 year with: airflow limitation (FEV1/FVC < 0.7); some bronchodilator reversibility (post-bronchodilator FEV1 > 20% and ≤ 80% of predicted values); current COPD therapy for at least 3 months prior to enrollment (a high-dose inhaled corticosteroid, ICS, with at least 2 other classes of medications, to obtain “triple therapy”); and a high risk of exacerbations (at least 1 severe [requiring hospitalization] or 2 moderate [treatment with systemic corticosteroids and/or antibiotics] exacerbations in past year).

Notable exclusion criteria were patients with diagnoses of asthma in never-smokers, alpha-1 antitrypsin deficiency, recent exacerbations (in past month), lung volume reduction surgery (in past year), eosinophilic or parasitic diseases, or those with recent monoclonal antibody treatment. Patients with the asthma-COPD overlap syndrome were included only if they had a history of smoking and met the COPD inclusion criteria listed above.

Intervention. The treatment period lasted for a total of 52 weeks, with an additional 8 weeks of follow-up. Patients were randomized 1:1 to placebo or low-dose medication (100 mg) using permuted-block randomization in the METREX study regardless of eosinophil count (but they were stratified for a modified intention-to-treat analysis at screening into either low eosinophilic count [< 150 cells/uL] or high [≥ 150 cells/uL]). In the METREO study, patients were randomized 1:1:1 to placebo, low-dose (100 mg), or high-dose (300 mg) medication only if blood eosinophilia was present (≥ 150 cells/uL at screening or ≥ 300 cells/uL in past 12 months). Investigators and patients were blinded to presence of drug or placebo. Sample size calculations indicated that in order to provide a 90% power to detect a 30% decrease in the rate of exacerbations in METREX and 35% decrease in METREO, a total of 800 patients and 660 patients would need to be enrolled in METREX and METREO respectively. Both studies met their enrollment quota.

Main outcome measures. The primary outcome was the annual rate of exacerbations that were either moderate (requiring systemic corticosteroids and/or antibiotics) or severe (requiring hospitalization). Secondary outcomes included the time to first moderate/severe exacerbation, change from baseline in the COPD Assessment Test (CAT) and St. George’s Respiratory Questionnaire (SGRQ), and change from baseline in blood eosinophil count, FEV1, and FVC. Safety and adverse events endpoints were also assessed.

A modified intention-to-treat analysis was performed overall and in the METREX study stratified on eosinophilic count at screening; all patients who underwent randomization and received at least one dose of medication or placebo were included in that respective group. Multiple comparisons were accounted for using the Benjamini-Hochberg Test, exacerbations were assumed to follow a negative binomial distribution, and Cox proportional-hazards was used to model the relationship between covariates of interest and the primary outcome.

Main results. In the METREX study, 1161 patients were enrolled and 836 underwent randomization and received at least 1 dose of medication or placebo. In METREO, 1071 patients were enrolled and 674 underwent randomization and received at least one dose of medication or placebo. In both studies the patients in the medication and placebo groups were well balanced at baseline across demographics (age, gender, smoking history, duration of COPD) and pulmonary function (FEV1, FVC, FEV1/FVC, CAT, SGRQ). In METREX, a total of 462 (55%) patients had an eosinophilic phenotype and 374 (45%) did not.

There was no difference between groups in the primary endpoint of annual exacerbation rate in METREO (1.49/yr in placebo vs. 1.19/yr in low-dose and 1.27/yr in high-dose mepolizumab, rate ratio of high-dose to placebo 0.86, 95% confidence interval [CI] 0.7–1.05, P = 0.14). There was no difference in the primary outcome in the overall intention-to-treat analysis in the METREX study (1.49/yr in mepolizumab vs. 1.52/yr in placebo, P > 0.99). Only when analyzing the high eosinophilic phenotype in the stratified intention-to-treat METREX group was there a significant difference in the primary outcome (1.41/yr in mepolizumab vs. 1.71/yr in placebo, P = 0.04, rate ratio 0.82, 95% CI 0.68–0.98).

There were no significant differences in any secondary endpoint in the METREO study. In the METREX study, mepolizumab treatment resulted in a significantly longer time to first exacerbation (192 days vs. 141 days, hazard ratio 0.75, 95% CI 0.60–0.94, P = 0.04) but no difference in the change in SGRQ (–2.8 vs. –3.0, P > 0.99) or CAT score (–0.8 vs. 0, P > 0.99). There was no significant difference in any measures of pulmonary function between the treatment and placebo groups (FEV1, FVC, FEV1/FVC). As expected, there was a significant decrease in peripheral blood eosinophil count in both studies in the medication arm. The incidence of adverse events and safety endpoints were similar between the trial groups in METREX and METREO.

 

 

Conclusions. In this pair of placebo-controlled double-blind randomized parallel studies, there was a significant decline in annual exacerbation rate in patients with an eosinophilic phenotype treated with mepolizumab in a stratified intention-to-treat analysis of one of two parallel studies (METREX). However, there was no significant difference in the primary outcome of the other parallel study (METREO), which included only those patients with an eosinophilic phenotype. Additionally, there was no significant difference in any secondary endpoints in either study. The medication was generally safe and well tolerated.

Commentary

Mepolizumab is a humanized monoclonal antibody that targets and blocks interleukin-5, a key mediator of eosinophilic activity. Due to its ability to decrease eosinophil number and function, it is currently approved as a therapy for severe asthma with an eosinophilic phenotype [1]. While asthma and COPD have historically been thought of as separate entities with distinct pathophysiologic mechanisms, recent evidence has suggested that a subset of COPD patients experience significant eosinophilic inflammation. This group may behave more like asthmatic patients, and may have a different response to medications such as inhaled corticosteroids, but the role of eosinophils to guide prognostication and treatment in this group is still unclear [2,3].

In this study, Pavord and colleagues investigated the use of the anti-IL5 drug mepolizumab in COPD patients at risk of exacerbations who demonstrated an eosinophilic phenotype. The physiologic rationale for the study was that eosinophilic inflammation is thought to be a driver of exacerbations in COPD patients with an eosinophilic phenotype, and therefore a decrease in eosinophilic number and function should result in a decrease in exacerbations. The authors conducted a well-designed placebo-controlled double-blind study with a clearly defined endpoint, met their enrollment goals as determined by their power calculations, and used COPD patients at high risk of exacerbations to enrich their study.

There was no difference in the primary outcome in the METREO arm of the study, which included patients with baseline eosinophilia (> 150 cells/uL) or in the overall intention-to-treat analysis in METREX (which did not screen patients on baseline eosinophil count). Only when stratified on baseline eosinophil count in the METREX study was a significant treatment effect found, where patients with high eosinophil count at baseline (> 150 cells/uL) had a decreased risk of exacerbations when treated with mepolizumab. Notably there was no difference in any secondary outcome in METREO or in METREX aside from a longer time to first exacerbation in METREX in the mepolizumab group. The authors use this data to conclude that mepolizumab treatment results in a lower rate of exacerbations and a longer time to the first exacerbation in COPD patients with an eosinophilic phenotype, and the extent of the treatment effect is related to blood eosinophil counts.

The authors conducted a well-designed and rigorous study, and used robust and appropriate statistical analysis; however, significant questions remain regarding their conclusions. The primary concern is the role of mepolizumab in the treatment of COPD patients to decrease exacerbations may be overstated. When including only those with baseline eosinophilia in the METREO arm, there was no significant difference between placebo and low or high dose of mepolizumab; however, there was an appropriate and expected decrease in blood eosinophils, indicating the medication worked as intended. In the overall intention-to-treat analysis in the METREX arm, there was no difference between mepolizumab and placebo, and only in the analysis of METREX stratified to eosinophil count was there a significant difference (with an upper confidence interval rate ratio [0.98] approaching unity).

Additionally there was no significant difference between the 2 groups across a number of clinically important secondary endpoints, including pulmonary function measurements and symptomatic scores. Only the time to exacerbation was significantly longer in the mepolizumab group in METREX.

Taken together, this calls into question the conclusion that a decrease in eosinophil counts due to mepolizumab has resulted in a lower rate of exacerbations, particularly as a higher dose of mepolizumab did not result in a stronger effect. The lack of difference between groups in secondary endpoints is also concerning, as those would be expected to improve with a decrease in exacerbations [4,5]. As the authors point out, their evidence suggests that eosinophils may be an important biomarker in COPD and may aid in the therapeutic decision-making process. However, given the inconsistencies in the data as noted above, it would be difficult to rely on the evidence from this study alone to support their conclusion regarding the clinical utility of mepolizumab in COPD.

The authors discuss a number of limitations that may account for the lack of consistent effect seen in this study. Aside from the standard limitations applicable to any clinical trial, they note the potential confounding effect of previous oral glucocorticoid therapy in reducing eosinophil counts. This may have masked the eosinophilic phenotype in some study patients, leading to the attenuated effect of mepolizumab seen in this study.

The authors also note that information that might be potentially valuable for identifying treatment responders, such as a history of allergies and atopy, were not available. Inclusion of those patients may be helpful in enriching the trial with potential treatment-responders, and future studies may benefit from focusing on COPD patients with a more atopic phenotype who more closely resemble those with the asthma-COPD overlap syndrome.

A final limitation to discuss is the focus on blood eosinophilic counts. Due to the difficulty of measuring sputum eosinophils, and the reasonable degree of correlation between blood and sputum in asthmatic patients, blood eosinophils have largely supplanted sputum eosinophils as markers of TH2 CD4 T-cell activity in the pulmonary system [6]. This substitution is also used in the COPD population, however, due to the differences in pathophysiology it is unclear if eosinophils in asthmatic patients behave similarly to those in COPD patients [7]. Additionally, the cutoff of 150 cells/uL has been obtained primarily from sub-group analysis of previous studies on COPD patients, but it is unclear if this cutoff truly reflects elevated sputum eosinophilia. While there is likely some degree of correlation between blood and sputum eosinophilia in COPD patients, a lack of significant effect seen in this study may be due to an incorrect cutoff for elevated eosinophilia and a reliance on blood eosinophils over sputum counts. Further studies utilizing sputum eosinophils may be of value in addressing this limitation.

Applications for Clinical Practice

In this study, Pavord and colleagues found a potential benefit of mepolizumab for reducing exacerbations in COPD patients with an eosinophilic phenotype. The conflicting results regarding the underlying physiology and the weak treatment effect suggest this medication is not ready for use in clinical practice without additional supporting evidence. From a practical standpoint, the high cost of the medication (approximately $2500 per month) and the marginal benefit of treatment imply that mepolizumab may not be cost-effective in COPD, and even trial-basis treatment of individual patients should be discouraged until additional supporting data become available. The primary open questions are the optimal selection of COPD patients who will benefit from mepolizumab and the optimal dose needed to achieve that benefit. The results presented here do not satisfactorily answer these questions, and additional studies are required.

—Arun Jose, MD, The George Washington University, Washington, DC

References

1. Pelaia C, Vatrella A, Busceti MT, et al. Severe eosinophilic asthma: from the pathogenic role of interleukin-5 to the therapeutic action of mepolizumab. Drug Des Devel Ther 2017;11:3137–44.

2. Kim VL, Coombs NA, Staples KJ, et al. Impact and associations of eosinophilic inflammation in COPD: analysis of the AERIS cohort. Eur Respir J 2017;50:pii:1700853.

3. Roche N, Chapman KR, Vogelmeier CF, et al. Blood eosinophils and response to maintenance chronic obstructive pulmonary disease treatment. Data from the FLAME trial. Am J Respir Crit Care Med 2017;195:1189–97.

4. Halpin DMG, Decramer M, Celli BR, et al. Effect of a single exacerbation on decline in lung function in COPD. Respir Med 2017;128:85–91.

5. Rassouli F, Baty F, Stolz D, et al. Longitudinal change of COPD assessment test (CAT) in a telehealthcare cohort is associated with exacerbation risk. Int J COPD 2017;12:3103–9.

6. Gauthier M, Ray A, Wenzel SE. Evolving concepts of asthma. Am J Respir Crit Care Med 2015;192:660–8.

7. Negewo NA, McDonald VM, Baines KJ, et al. Peripheral blood eosinophils: a surrogate marker for airway eosinophilia in stable COPD. Int J COPD 2016;11:1495–504.


Issue
Journal of Clinical Outcomes Management - 25(1)

Factors Impacting Receipt of Weight Loss Advice from Providers Among Patients with Overweight/Obesity

Article Type
Changed
Wed, 04/29/2020 - 11:37

Study Overview

Objective. To examine receipt of provider advice to lose weight among primary care patients who are overweight or obese.

Design. Cross-sectional study.

Setting and participants. Participants were recruited through convenience sampling of primary care practices that were members of a national practice-based research network or part of a federally qualified health care system based in the Southeastern United States. Each practice used 1 or more of the following recruitment strategies: self-referral from study flyers posted in practices, given during clinic appointments, or posted on the practice portal (n = 3 practices); mailed invitations to patients who were part of a practice registry (n = 7 practices); and on-site recruitment by research staff during clinic hours (n = 2 practices). Inclusion criteria were having at least a 3-year history of being a patient in the practice, being aged 18 years or older, and having overweight or obese status according to Centers for Disease Control and Prevention definitions (body mass index [BMI] 25.0–29.9 kg/m2 = overweight, ≥ 30 kg/m2 = obese). After completing informed consent, participants completed an interview comprising a 20-minute survey, conducted either in English or Spanish and either in person or by telephone.

Measures. The survey obtained measures related to sociodemographic characteristics (race, gender, age, marital status, education level, employment status, income level), clinical characteristics (height and weight, history of diabetes/hypertension), psychological variables (readiness to make weight loss or maintenance efforts and confidence in their ability to lose or maintain weight), shared decision-making about weight loss/management (using the SDM-Q-9, with a higher total score indicating greater shared decision-making), and physician advice about weight loss (whether they had ever been advised by a doctor or other health care professional to lose weight or reduce their weight).

Main results. Among the study sample (n = 282), 65% were female, 60% were from racial and ethnic minority groups, 55% were married, 57% had some college education or higher, and 37% had an income level below $20,000/year. The mean age of participants was 53.1 (± 14.4) years. Fifty-nine percent had been advised by their physician to lose weight.

The percentage of participants who reported receiving provider advice was statistically different from 50% using the binomial test (P = 0.0035). Based on bivariate analysis of provider advice about weight loss, women were significantly more likely than men to report that their provider had advised them to lose weight (P = 0.001). Both actual and perceived obesity were associated significantly with receiving provider advice about weight loss (both P = 0.001). Diabetic patients were also significantly more likely than nondiabetic patients to report that their provider had advised them to lose weight (P = 0.01). Participants who reported greater readiness to lose or maintain their weight were more likely to report provider advice about weight loss compared to those with less readiness (P = 0.003). While employed patients, those who had at least some college education, and those who were hypertensive were more likely to report provider advice compared to those who were unemployed, had less education, and were not hypertensive, these associations were not statistically significant (P = 0.06, P = 0.06, P = 0.10, respectively). There were no racial/ethnic differences in receipt of provider advice to lose weight (P = 0.76). Participants with greater shared decision-making were more likely to report provider advice about weight loss (P < 0.001).
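The reported binomial test can be reproduced from the summary figures alone. The sketch below, in Python, assumes the reported 59% corresponds to 166 of the 282 participants (an inference from rounding, not a figure stated in the paper) and computes an exact two-sided P value against a null proportion of 50%:

```python
from math import comb

n = 282              # study sample size
k = round(0.59 * n)  # 166; inferred from the reported 59% (assumption)

# Exact two-sided binomial test against p0 = 0.5. The Binomial(n, 0.5)
# distribution is symmetric, so the two-sided P value is twice the
# upper-tail probability P(X >= k).
p_upper = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
p_two_sided = 2 * p_upper

print(f"{p_two_sided:.4f}")  # on the order of the reported P = 0.0035
```

If SciPy is available, `scipy.stats.binomtest(166, 282, 0.5).pvalue` performs the same calculation.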

In the multivariate logistic regression analysis, obesity status, perceived obesity, and shared decision-making about weight loss/management each had significant independent associations with receiving physician advice about weight loss. Participants with obesity were more likely than those with overweight status to report provider advice (odds ratio [OR] = 1.31, 95% CI = 1.25–4.34, P = 0.001). Similarly, participants who believed they had overweight/obesity had a greater likelihood of reporting provider advice compared with those who did not (OR = 1.40, 95% CI = 2.43–6.37, P < 0.001). Shared decision-making about weight loss/management was associated with an increased likelihood of reporting provider advice (OR = 3.30, 95% CI = 2.62–4.12, P < 0.001).
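For readers interpreting the odds ratios above: in logistic regression, an OR is the exponential of the model coefficient, and the 95% CI is obtained by exponentiating the coefficient ± 1.96 standard errors. The sketch below uses a hypothetical standard error (not a value reported in this study) to illustrate the conversion:

```python
from math import exp, log

# Hypothetical logistic-regression output (NOT taken from this study):
beta = log(3.30)  # coefficient chosen so the OR works out to 3.30
se = 0.115        # hypothetical standard error on the log-odds scale

odds_ratio = exp(beta)
ci_low = exp(beta - 1.96 * se)
ci_high = exp(beta + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}")
```

With a standard error of roughly 0.11 on the log scale, the interval comes out close to the 2.62–4.12 reported for the shared decision-making OR, which is a useful plausibility check when reading regression tables.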

Conclusions. Many patients with overweight/obesity may not be receiving advice from their providers to lose or manage their weight. While providers should advise patients with overweight/obesity about weight loss and management, patient beliefs about their own weight status and perceptions of shared decision-making influence whether provider advice is reported. Patient beliefs as well as provider behaviors should be addressed as part of efforts to improve the management of overweight/obesity in primary care.

Commentary

Over 35% of adults in the United States have a BMI in the obese range [1], putting them at risk for obesity-related comorbidities [2] that are often diagnosed and treated in primary care settings. The US Preventive Services Task Force recommends that all patients be screened for obesity and offered intensive lifestyle counseling, since even modest weight loss can have significant health benefits [3]. Providers, particularly in the primary care setting, are ideally situated to promote weight loss through effective obesity counseling, as multiple clinic visits over time enable rapport building and behavioral change management [4]. Indeed, a 2013 systematic review and meta-analysis of survey data examining provider weight loss counseling and its association with patient weight loss behavior found that primary care provider advice on weight loss has a significant impact on patients' attempts to change weight-related behaviors [5]. In this study, the authors reported higher rates of physician advice about weight loss than other studies; however, the results still demonstrate that, based on patient reporting, not all providers are advising weight management or weight loss. Several studies have described physicians' barriers to weight management and obesity counseling in adults, including lack of training, lack of time, and perceived ineffectiveness of their own efforts [6–8].

Additionally, and perhaps more importantly, different factors can impact patient perception of provider advice and/or counseling around weight management, weight loss, or obesity. These can include race/ethnicity [9], health literacy [10], and motivation [11]. This study adds to the literature by shedding new light on variables that are important to patients being advised by providers to lose/manage their weight, including actual and perceived obesity status, and perceived shared decision-making. Previous research has focused on patient-provider communication and shared decision-making in the areas of antibiotic use [12], diabetes management [13], and weight loss [14].

Strengths of this study included the variety of recruitment methods employed to enroll patients from multiple clinic sites, the diverse sociodemographic characteristics of the study sample that resulted, the assessment of variables using standard or previously used measures, and the use of both bivariate and multivariate analyses to assess relationships between variables. Key limitations were acknowledged by the authors and included the cross-sectional design, which does not allow for causality to be assessed; the use of surveys for data collection, which relies on subjective and self-reported data; the assessment of weight management/loss advice only from the perspective of the patient, as opposed to including the provider perspective or using objective observations/data; and the lack of assessment of advice content or frequency of advice given.

Applications for Clinical Practice

As the authors suggest, this study highlights opportunities for improving weight-related advice for patients. Providers should incorporate obesity screening and counseling for all patients, as recommended by clinical care guidelines and the literature. In weight management conversations, providers should also be mindful of patients' beliefs about and understanding of their weight status, and incorporate shared decision-making practices to increase patient self-efficacy (ie, confidence and readiness) to make weight loss efforts.

References

1. Flegal KM, Kruszon-Moran D, Carroll MD, et al. Trends in obesity among adults in the United States, 2005 to 2014. JAMA 2016;315:2284–91.

2. Guh DP, Zhang W, Bansback N, et al. The incidence of co-morbidities related to obesity and overweight: a systematic review and meta-analysis. BMC Public Health 2009;9:88.

3. Moyer VA. Screening for and management of obesity in adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2012;157:373–8.

4. Schlair S, Moore S, Mcmacken M, Jay M. How to deliver high quality obesity counseling using the 5As framework. J Clin Outcomes Manag 2012;19:221–9.

5. Rose SA, Poynter PS, Anderson JW, et al. Physician weight loss advice and patient weight loss behavior change: a literature review and meta-analysis of survey data. Int J Obes (Lond) 2013;37:118–28.

6. Forman-Hoffman V, Little A, Wahls T. Barriers to obesity management: a pilot study of primary care clinicians. BMC Fam Pract 2006;7:35.

7. Jay M, Gillespie C, Ark T, et al. Do internists, pediatricians, and psychiatrists feel competent in obesity care? Using a needs assessment to drive curriculum design. J Gen Intern Med 2008;23:1066–70.

8. Leverence RR, Williams RL, Sussman A, Crabtree BF. Obesity counseling and guidelines in primary care: a qualitative study. Am J Prev Med 2007;32:334–9.

9. Durant NH, Bartman B, Person SD, et al. Patient provider communication about the health effects of obesity. Patient Educ Couns 2009;75:53–7.

10. Zarcadoolas C, Levy J, Sealy Y, et al. Health literacy at work to address overweight and obesity in adults: The development of the Obesity Action Kit. J Commun Health 2011;4:88–101.

11. Befort CA, Greiner KA, Hall S, et al. Weight-related perceptions among patients and physicians: how well do physicians judge patients’ motivation to lose weight? J Gen Intern Med 2006;21:1086–90.

12. Schoenthaler A, Albright G, Hibbard J, Goldman R. Simulated conversations with virtual humans to improve patient-provider communication and reduce unnecessary prescriptions for antibiotics: a repeated measure pilot study. JMIR Med Educ 2017;3:e7.

13. Griffith M, Siminerio L, Payne T, Krall J. A shared decision-making approach to telemedicine: engaging rural patients in glycemic management. J Clin Med 2016;5:103.

14. Carcone AI, Naar-King S, Brogan KE, et al. Provider communication behaviors that predict motivation to change in black adolescents with obesity. J Dev Behav Pediatr 2013;34:599–608.

Issue
Journal of Clinical Outcomes Management - 24(12)a


References

1. Flegal KM, Kruszon-Moran D, Carroll MD, et al. Trends in obesity among adults in the United States, 2005 to 2014. JAMA 2016;315:2284–91.

2. Guh DP, Zhang W, Bansback N, et al. The incidence of co-morbidities related to obesity and overweight: a systematic review and meta-analysis. BMC Public Health 2009;9:88.

3. Moyer VA. Screening for and management of obesity in adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2012;157:373–8.

4. Schlair S, Moore S, Mcmacken M, Jay M. How to deliver high quality obesity counseling using the 5As framework. J Clin Outcomes Manag 2012;19:221–9.

5. Rose SA, Poynter PS, Anderson JW, et al. Physician weight loss advice and patient weight loss behavior change: a literature review and meta-analysis of survey data. Int J Obes (Lond) 2013;37:118–28.

6. Forman-Hoffman V, Little A, Wahls T. Barriers to obesity management: a pilot study of primary care clinicians. BMC Fam Pract 2006;7:35.

7. Jay M, Gillespie C, Ark T, et al. Do internists, pediatricians, and psychiatrists feel competent in obesity care? Using a needs assessment to drive curriculum design. J Gen Intern Med 2008;23:1066–70.

8. Leverence RR, Williams RL, Sussman A, Crabtree BF. Obesity counseling and guidelines in primary care: a qualitative study. Am J Prev Med 2007;32:334–9.

9. Durant NH, Bartman B, Person SD, et al. Patient provider communication about the health effects of obesity. Patient Educ Couns 2009;75:53–7.

10. Zarcadoolas C, Levy J, Sealy Y, et al. Health literacy at work to address overweight and obesity in adults: the development of the Obesity Action Kit. J Commun Health 2011;4:88–101.

11. Befort CA, Greiner KA, Hall S, et al. Weight-related perceptions among patients and physicians: how well do physicians judge patients’ motivation to lose weight? J Gen Intern Med 2006;21:1086–90.

12. Schoenthaler A, Albright G, Hibbard J, Goldman R. Simulated conversations with virtual humans to improve patient-provider communication and reduce unnecessary prescriptions for antibiotics: a repeated measure pilot study. JMIR Med Educ 2017;3:e7.

13. Griffith M, Siminerio L, Payne T, Krall J. A shared decision-making approach to telemedicine: engaging rural patients in glycemic management. J Clin Med 2016;5:103.

14. Carcone AI, Naar-King S, Brogan KE, et al. Provider communication behaviors that predict motivation to change in black adolescents with obesity. J Dev Behav Pediatr 2013;34:599–608.


Issue
Journal of Clinical Outcomes Management - 24(12)a

Inhaled Corticosteroid Plus Long-Acting Beta-Agonist for Asthma: Real-Life Evidence

Article Type
Changed
Wed, 04/29/2020 - 11:57

Study Overview

Objective. To determine the effectiveness of asthma treatment using fluticasone furoate plus vilanterol in a setting that is closer to usual clinical practice.

Design. Open-label, parallel-group, randomized controlled trial.

Setting and participants. The study was conducted at 74 general practice clinics in Salford and South Manchester, UK, between Nov 2012 and Dec 2016. Patients with a general practitioner’s diagnosis of symptomatic asthma and on maintenance inhaler therapy (either inhaled corticosteroid [ICS] alone or in combination with a long-acting bronchodilator [LABA]) were recruited. Patients with recent history of life-threatening asthma, COPD, or concomitant life-threatening disease were excluded. Participants were randomly assigned through a centralized randomization service and stratified by Asthma Control Test (ACT) score and by previous asthma maintenance therapy (ICS or ICS/LABA). Only those with an ACT score < 20 were included in the study.

Intervention. Patients were randomized to receive either a combination of fluticasone furoate and vilanterol (FF/VI) delivered by novel dry powder inhalation (DPI) (Ellipta) or to continue with their maintenance therapy. General practitioners provided care in their usual manner and could continuously optimize therapy according to their clinical opinion. Treatments were dispensed by community pharmacies in the usual way. Patients could modify their treatment and remain in the study. Those in the FF/VI group were allowed to change to other asthma medications and could stop taking FF/VI. Those in the usual care group were also allowed to alter medications, but could not initiate FF/VI.

Main outcome measures. The primary endpoint was ACT score at week 24 (the percentage of patients at week 24 with either an ACT score of 20 or greater or an increase of 3 or greater in the ACT score from baseline, termed responders). Safety endpoints included the incidence of serious pneumonias. The study utilized the Salford electronic medical record system, which allows near real-time collection and monitoring of safety data. Secondary endpoints included ACT scores at other time points, all asthma-related primary and secondary care contacts, annual rate of severe exacerbations, number of salbutamol inhalers dispensed, and time to modification of initial therapy.

Main results. 4233 patients were randomized: 2119 to usual care and 2114 to the FF/VI group. 605 patients from the usual care group and 602 from the FF/VI group had a baseline ACT score of 20 or greater and were thus excluded from the primary effectiveness analysis population. 306 patients in the usual care group and 342 in the FF/VI group withdrew for various reasons, including adverse events, loss to follow-up, or protocol deviations. Mean patient age was 50 years. Within the usual care group, 64% of patients received an ICS/LABA combination and 36% received ICS alone. Within the FF/VI group, 65% were prescribed 100 μg/25 μg FF/VI and 35% were prescribed 200 μg/25 μg FF/VI. At week 24, 74% of the FF/VI group were responders, versus 60% of the usual care group; the odds of being a responder with FF/VI were approximately double those with usual care (OR 1.97; 95% CI 1.71–2.26; P < 0.001). Patients in the FF/VI group had a slightly higher incidence of pneumonia than the usual care group (23 vs 16 events; incidence ratio 1.4; 95% CI 0.8–2.7), as well as a higher rate of primary care visits/contacts per year (9.7% increase; 95% CI 4.6%–15.0%).
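The reported odds ratio can be sanity-checked from the responder percentages alone. The following minimal sketch computes a crude, unadjusted odds ratio; the trial's reported OR of 1.97 came from an adjusted model, so the crude figure is expectedly close but not identical:

```python
# Crude (unadjusted) odds ratio for responder status at week 24,
# reconstructed from the reported proportions: 74% (FF/VI) vs 60% (usual care).

def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

p_ffvi, p_usual = 0.74, 0.60              # reported responder proportions
crude_or = odds(p_ffvi) / odds(p_usual)   # (0.74/0.26) / (0.60/0.40)
print(round(crude_or, 2))                 # prints 1.9, near the adjusted OR of 1.97
```

The gap between 1.9 and 1.97 reflects covariate adjustment (eg, the randomization strata) in the trial's logistic model, not an arithmetic discrepancy.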

Conclusion. In patients with a general practitioner’s diagnosis of symptomatic asthma and on maintenance inhaler therapy, initiation of a once-daily treatment regimen of combined FF/VI improved asthma control without increasing the risk of serious adverse events when compared with optimized usual care.

Commentary

Woodcock et al conducted a pragmatic randomized controlled study. This innovative research method prospectively enrolled a large number of patients who were randomized to groups that could involve 1 or more interventions and who were then followed according to the treating physician’s usual practice. The patients’ experience was kept as close to everyday clinical practice care as possible to preserve the real-world nature of the study. The positive aspect of this innovative pragmatic research design is the inclusion of patients with varied disease severity and with comorbidities that are not well represented in conventional double-blind randomized controlled trials, such as patients with smoking history, obesity, or multiple comorbidities. In addition, an electronic health record system was used to track serious adverse events in near real-time and increased the accuracy of the data and minimized data loss.

While the pragmatic study design offers innovation, it also has some limitations. Effectiveness studies using a pragmatic approach are less controlled compared with traditional efficacy RCTs and are more prone to low medication compliance and high rates of follow-up loss. Further, Woodcock et al allowed patients to remain in the FF/VI group even though they may have stopped taking FF/VI. Indeed, in the FF/VI group, 463 (22%) of the 2114 patients changed their medication, and 381 (18%) switched to the usual care group. Patients were analyzed using intention to treat and thus were analyzed in the group to which they were initially randomized. This could have affected results, as a good proportion of patients in the FF/VI group were not actually taking the FF/VI. Within the usual care group, 376 (18%) of 2119 patients altered their medication and 3 (< 1%) switched to FF/VI, though this was prohibited. In routine care, adherence rates are expected to be low (20%–40%) and this is another possible weakness of the study; in closely monitored RCTs, adherence rates are around 80%–90%.

The authors did not include objective measures of the severity or types of asthma, which can be obtained using pulmonary function tests, eosinophil count, or other markers of inflammation. By identifying asthma patients via the general practitioner’s diagnosis, the study is more reflective of real life and primary care–driven; however, one cannot rule out accidental inclusion of patients who do not have asthma (which could include patients with post-infectious cough, vocal cord dysfunction, or anxiety) or patients who would not readily respond to typical asthma therapy (such as those with allergic bronchopulmonary aspergillosis or eosinophilic granulomatosis with polyangiitis). In addition, the authors used only a subjective measure to define control: the ACT score, administered by telephone. Other outcome measures included exacerbation rate, primary care physician visits, and time to exacerbation, which may be insensitive to residual inflammation or asthma severity. In the absence of objective measurement of airway obstruction or inflammation, the outcomes assessed may not have comprehensively evaluated efficacy.

The open-label, intention-to-treat, and pragmatic design of the study may have generated major selection bias, despite the randomization. Because general practitioners who directly participated in the recruitment of the patients also monitored their treatment, volunteer or referral bias may have occurred. As the authors admitted, there were differences present in practice and treatment due to variation of training and education of the general practitioners. In addition, the current study was funded by a pharmaceutical company and the trial medication was dispensed free of cost, further generating bias.

Further consideration of the study medication also raises questions about the study design. Combined therapy with low- to moderate-dose ICS/LABA is currently indicated for asthma patients with moderate persistent or more severe asthma. The current US insurance system encourages management to begin with low-dose ICS before escalating to a combination of ICS/LABA. Given the previously published evidence of superiority for combined ICS/LABA over ICS alone for asthma control [2,3], inclusion criteria could have been limited to patients who were already receiving ICS/LABA, to more accurately compare the trial medication against the accepted standard medications. By including patients who were on ICS/LABA as well as those on ICS alone (in the usual care group, 64% were on ICS/LABA and 36% were on ICS), the likelihood of responders in the FF/VI group could have been inflated relative to the usual care group. In addition, patients with low-severity asthma symptoms, such as intermittent or mild persistent asthma, could have been overtreated by FF/VI per current guidelines. About 30% of the patients initially enrolled in the study had baseline ACT scores greater than 20, and some patients had less severe asthma, as indicated by treatment with ICS alone. The authors also included 2 different doses of fluticasone furoate in their study group.

It is of concern that the incidence of pneumonia with ICS/LABA in this study was slightly higher in the FF/VI than in the usual care group. Although it was not statistically significant in this study, the increased pneumonia risk with ICS has been observed in many other studies [4,5].


Applications for Clinical Practice

Fluticasone furoate plus vilanterol (FF/VI) can be a therapeutic option in patients with asthma, with a small increased risk of pneumonia similar to that seen with other inhaled corticosteroids. However, a stepwise therapeutic approach, following the published asthma treatment strategy [6], should be emphasized when escalating treatment to include FF/VI.

—Minkyung Kwon, MD, Joel Roberson, MD, and Neal Patel, MD, Pulmonary and Critical Care Medicine, Mayo Clinic Florida, Jacksonville, FL (Drs. Kwon and Patel), and Department of Radiology, Oakland University/Beaumont Health, Royal Oak, MI (Dr. Roberson)

References

1. Chalkidou K, Tunis S, Whicher D, et al. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials 2012;9:436–46.

2. O’Byrne PM, Bleecker ER, Bateman ED, et al. Once-daily fluticasone furoate alone or combined with vilanterol in persistent asthma. Eur Respir J 2014;43:773–82.

3. Bateman ED, O’Byrne PM, Busse WW, et al. Once-daily fluticasone furoate (FF)/vilanterol reduces risk of severe exacerbations in asthma versus FF alone. Thorax 2014;69:312–9.

4. McKeever T, Harrison TW, Hubbard R, Shaw D. Inhaled corticosteroids and the risk of pneumonia in people with asthma: a case-control study. Chest 2013;144:1788–94.

5. Crim C, Dransfield MT, Bourbeau J, et al. Pneumonia risk with inhaled fluticasone furoate and vilanterol compared with vilanterol alone in patients with COPD. Ann Am Thorac Soc 2015;12:27–34.

6. GINA. Global strategy for asthma management and prevention. 2017. Accessed at ginaasthma.org.

 

Issue
Journal of Clinical Outcomes Management - 24(11)

Study Overview

Objective. To determine the effectiveness of asthma treatment using fluticasone furoate plus vilanterol in a setting that is closer to usual clinical practice.

Design. Open-label, parallel group, randomised controlled trial.

Setting and participants. The study was conducted at 74 general practice clinics in Salford and South Manchester, UK, between Nov 2012 and Dec 2016. Patients with a general practitioner’s diagnosis of symptomatic asthma and on maintenance inhaler therapy (either inhaled corticosteroid [ICS] alone or in combination with a long-acting bronchodilator [LABA]) were recruited. Patients with recent history of life-threatening asthma, COPD, or concomitant life-threatening disease were excluded. Participants were randomly assigned through a centralized randomization service and stratified by Asthma Control Test (ACT) score and by previous asthma maintenance therapy (ICS or ICS/LABA). Only those with an ACT score < 20 were included in the study.

Intervention. Patients were randomized to receive either a combination of fluticasone furoate and vilanterol (FF/VI) delivered by novel dry powder inhalation (DPI) (Ellipta) or to continue with their maintenance therapy. General practitioners provided care in their usual manner and could continuously optimize therapy according to their clinical opinion. Treatments were dispensed by community pharmacies in the usual way. Patients could modify their treatment and remain in the study. Those in the FF/VI group were allowed to change to other asthma medications and could stop taking FF/VI. Those in the usual care group were also allowed to alter medications, but could not initiate FF/VI.

Main outcome measures. The primary endpoint was ACT score at week 24 (the percentage of patients at week 24 with either an ACT score of 20 or greater or an increase of 3 or greater in the ACT score from baseline, termed responders). Safety endpoints included the incidence of serious pneumonias. The study utilized the Salford electronic medical record system, which allows near to real-time collection and monitoring of safety data. Secondary endpoints included ACT at various weeks, all asthma-related primary and secondary care contacts, annual rate of severe exacerbations, number of salbutamol inhalers dispensed, and time to modification or initial therapy.

Main results. 4233 patients were randomized, with 2119 patients randomized to usual care and 2114 randomized to the FF/VI group. 605 from the usual care group and 602 from the FF/VI group had a baseline ACT score greater than or equal to 20 and were thus excluded from the primary effectiveness analysis population. 306 in the usual care group and 342 in the FF/VI group withdrew for various reasons, including adverse events, or were lost to follow-up or protocol deviations. Mean patient age was 50 years. Within the usual care group, 64% of patients received ICS/LABA combination and 36% received ICS only. Within the FF/VI group, 65% were prescribed 100 μg/25 μg FFI/VI and 35% were prescribed 200 μg/25 μg FF/VI. At week 24, the FF/VI group had 74% responders whereas the usual care group had 60% responders; the odds of being a responder with FF/VI was twice that of being a responder with usual care (OR 1.97; 95% CI 1.71–2.26, P < 0.001). Patients in the FF/VI group had a slightly higher incidence of pneumonia than did the usual care group (23 vs 16; incidence ratio 1.4, 95% CI 0.8–2.7). Also, those in the FF/VI group had an increase in the rate of primary care visits/contacts per year (9.7% increase, 95% CI 4.6%–15.0%).

Conclusion. In patients with a general practitioner’s diagnosis of symptomatic asthma and on maintenance inhaler therapy, initiation of a once-daily treatment regimen of combined FF/VI improved asthma control without increasing the risk of serious adverse events when compared with optimized usual care.

Commentary

Woodcock et al conducted a pragmatic randomized controlled study. This innovative research method prospectively enrolled a large number of patients who were randomized to groups that could involve 1 or more interventions and who were then followed according to the treating physician’s usual practice. The patients’ experience was kept as close to everyday clinical practice care as possible to preserve the real-world nature of the study. The positive aspect of this innovative pragmatic research design is the inclusion of patients with varied disease severity and with comorbidities that are not well represented in conventional double-blind randomized controlled trials, such as patients with smoking history, obesity, or multiple comorbidities. In addition, an electronic health record system was used to track serious adverse events in near real-time and increased the accuracy of the data and minimized data loss.

While the pragmatic study design offers innovation, it also has some limitations. Effectiveness studies using a pragmatic approach are less controlled compared with traditional efficacy RCTs and are more prone to low medication compliance and high rates of follow-up loss. Further, Woodcock et al allowed patients to remain in the FF/VI group even though they may have stopped taking FF/VI. Indeed, in the FF/VI group, 463 (22%) of the 2114 patients changed their medication, and 381 (18%) switched to the usual care group. Patients were analyzed using intention to treat and thus were analyzed in the group to which they were initially randomized. This could have affected results, as a good proportion of patients in the FF/VI group were not actually taking the FF/VI. Within the usual care group, 376 (18%) of 2119 patients altered their medication and 3 (< 1%) switched to FF/VI, though this was prohibited. In routine care, adherence rates are expected to be low (20%–40%) and this is another possible weakness of the study; in closely monitored RCTs, adherence rates are around 80%–90%.

The authors did not include objective measures of the severity or types of asthma, which can be obtained using pulmonary function tests, eosinophil count, or other markers of inflammation. By identifying asthma patients via the general practitioner’s diagnosis, the study is more reflective of real life and primary care–driven; however, one cannot rule out accidental inclusion of patients who do not have asthma (which could include patients with post-infectious cough, vocal cord dysfunction, or anxiety) or patients who would not readily respond to typical asthma therapy (such as those with allergic bronchopulmonary aspergillosis or eosinophilic granulomatosis with polyangitis). In addition, the authors used only subjective measures to define control: ACT score by telephone. Other outcome measures included exacerbation rate, primary care physician visits, and time to exacerbation, which may be insensitive to detecting residual inflammation or severity of asthma. In lieu of objectively measuring the degree of airway obstruction or inflammation, the outcomes measured by the authors may not have comprehensively evaluated efficacy.

The open-label, intention-to-treat, and pragmatic design of the study may have generated major selection bias, despite the randomization. Because general practitioners who directly participated in the recruitment of the patients also monitored their treatment, volunteer or referral bias may have occurred. As the authors admitted, there were differences present in practice and treatment due to variation of training and education of the general practitioners. In addition, the current study was funded by a pharmaceutical company and the trial medication was dispensed free of cost, further generating bias.

Further consideration of the study medication also brings up questions about the study design. Combined therapy with low- to moderate-dose ICS/LABA is currently indicated for asthma patients with moderate persistent or higher severity asthma. The current US insurance system encourages management to begin with low-dose ICS before escalating to a combination of ICS/LABA. Given the previously published evidence of superiority for combined ICS/LABA over ICS alone on asthma control [2,3], inclusion criteria could have been limited only to patients who were already receiving ICS/LABA to more accurately equate the trial medication with the accepted standard medications. By including patients who were on ICS/LABA as well as those only on ICS (in usual care group, 64% were on ICS/LABA and 36% were on ICS) the likelihood of responders in the FF/VI group could have been inflated compared to usual care group. In addition, patients with a low severity of asthma symptoms, such as only intermittent asthma or mild persistent asthma, could have been overtreated by FF/VI per current guidelines. About 30% of the patients initially enrolled in the study had baseline ACT scores greater than 20, and some patients had less severe asthma as indicated by the treatment with ICS alone. The authors also included 2 different doses of fluticasone furoate in their study group.

It is of concern that the incidence of pneumonia with ICS/LABA in this study was slightly higher in the FF/VI than in the usual care group. Although it was not statistically significant in this study, the increased pneumonia risk with ICS has been observed in many other studies [4,5].

 

 

Applications for Clinical Practice

Fluticasone furoate plus vilanterol (FF/VI) can be a therapeutic option in patients with asthma, with a small increased risk for pneumonia that is similar to other types of inhaled corticosteroids. However, a stepwise therapeutic approach, following the published asthma treatment strategy [6], should be emphasized when escalating treatment to include FF/VI.

—Minkyung Kwon, MD, Joel Roberson, MD, and Neal Patel, MD, Pulmonary and Critical Care Medicine, Mayo Clinic Florida, Jacksonville, FL (Drs. Kwon and Patel), and Department of Radiology, Oakland University/Beaumont Health, Royal Oak, MI (Dr. Roberson)

Study Overview

Objective. To determine the effectiveness of asthma treatment using fluticasone furoate plus vilanterol in a setting that is closer to usual clinical practice.

Design. Open-label, parallel group, randomised controlled trial.

Setting and participants. The study was conducted at 74 general practice clinics in Salford and South Manchester, UK, between Nov 2012 and Dec 2016. Patients with a general practitioner’s diagnosis of symptomatic asthma and on maintenance inhaler therapy (either inhaled corticosteroid [ICS] alone or in combination with a long-acting bronchodilator [LABA]) were recruited. Patients with recent history of life-threatening asthma, COPD, or concomitant life-threatening disease were excluded. Participants were randomly assigned through a centralized randomization service and stratified by Asthma Control Test (ACT) score and by previous asthma maintenance therapy (ICS or ICS/LABA). Only those with an ACT score < 20 were included in the study.

Intervention. Patients were randomized to receive either a combination of fluticasone furoate and vilanterol (FF/VI) delivered by novel dry powder inhalation (DPI) (Ellipta) or to continue with their maintenance therapy. General practitioners provided care in their usual manner and could continuously optimize therapy according to their clinical opinion. Treatments were dispensed by community pharmacies in the usual way. Patients could modify their treatment and remain in the study. Those in the FF/VI group were allowed to change to other asthma medications and could stop taking FF/VI. Those in the usual care group were also allowed to alter medications, but could not initiate FF/VI.

Main outcome measures. The primary endpoint was ACT score at week 24 (the percentage of patients at week 24 with either an ACT score of 20 or greater or an increase of 3 or greater in the ACT score from baseline, termed responders). Safety endpoints included the incidence of serious pneumonias. The study utilized the Salford electronic medical record system, which allows near to real-time collection and monitoring of safety data. Secondary endpoints included ACT at various weeks, all asthma-related primary and secondary care contacts, annual rate of severe exacerbations, number of salbutamol inhalers dispensed, and time to modification or initial therapy.

Main results. 4233 patients were randomized, with 2119 patients randomized to usual care and 2114 randomized to the FF/VI group. 605 from the usual care group and 602 from the FF/VI group had a baseline ACT score greater than or equal to 20 and were thus excluded from the primary effectiveness analysis population. 306 in the usual care group and 342 in the FF/VI group withdrew for various reasons, including adverse events, or were lost to follow-up or protocol deviations. Mean patient age was 50 years. Within the usual care group, 64% of patients received ICS/LABA combination and 36% received ICS only. Within the FF/VI group, 65% were prescribed 100 μg/25 μg FFI/VI and 35% were prescribed 200 μg/25 μg FF/VI. At week 24, the FF/VI group had 74% responders whereas the usual care group had 60% responders; the odds of being a responder with FF/VI was twice that of being a responder with usual care (OR 1.97; 95% CI 1.71–2.26, P < 0.001). Patients in the FF/VI group had a slightly higher incidence of pneumonia than did the usual care group (23 vs 16; incidence ratio 1.4, 95% CI 0.8–2.7). Also, those in the FF/VI group had an increase in the rate of primary care visits/contacts per year (9.7% increase, 95% CI 4.6%–15.0%).

Conclusion. In patients with a general practitioner’s diagnosis of symptomatic asthma and on maintenance inhaler therapy, initiation of a once-daily treatment regimen of combined FF/VI improved asthma control without increasing the risk of serious adverse events when compared with optimized usual care.

Commentary

Woodcock et al conducted a pragmatic randomized controlled study. This innovative research method prospectively enrolled a large number of patients who were randomized to groups that could involve 1 or more interventions and who were then followed according to the treating physician’s usual practice. The patients’ experience was kept as close to everyday clinical practice care as possible to preserve the real-world nature of the study. The positive aspect of this innovative pragmatic research design is the inclusion of patients with varied disease severity and with comorbidities that are not well represented in conventional double-blind randomized controlled trials, such as patients with smoking history, obesity, or multiple comorbidities. In addition, an electronic health record system was used to track serious adverse events in near real-time and increased the accuracy of the data and minimized data loss.

While the pragmatic study design offers innovation, it also has some limitations. Effectiveness studies using a pragmatic approach are less controlled compared with traditional efficacy RCTs and are more prone to low medication compliance and high rates of follow-up loss. Further, Woodcock et al allowed patients to remain in the FF/VI group even though they may have stopped taking FF/VI. Indeed, in the FF/VI group, 463 (22%) of the 2114 patients changed their medication, and 381 (18%) switched to the usual care group. Patients were analyzed using intention to treat and thus were analyzed in the group to which they were initially randomized. This could have affected results, as a good proportion of patients in the FF/VI group were not actually taking the FF/VI. Within the usual care group, 376 (18%) of 2119 patients altered their medication and 3 (< 1%) switched to FF/VI, though this was prohibited. In routine care, adherence rates are expected to be low (20%–40%) and this is another possible weakness of the study; in closely monitored RCTs, adherence rates are around 80%–90%.

The authors did not include objective measures of the severity or type of asthma, which can be obtained using pulmonary function tests, eosinophil counts, or other markers of inflammation. By identifying asthma patients via the general practitioner's diagnosis, the study is more reflective of real life and primary care–driven; however, one cannot rule out accidental inclusion of patients who do not have asthma (such as patients with post-infectious cough, vocal cord dysfunction, or anxiety) or patients who would not readily respond to typical asthma therapy (such as those with allergic bronchopulmonary aspergillosis or eosinophilic granulomatosis with polyangiitis). In addition, the authors used only a subjective measure to define control: the ACT score, administered by telephone. Other outcome measures included exacerbation rate, primary care physician visits, and time to exacerbation, which may be insensitive for detecting residual inflammation or asthma severity. Without objective measurement of the degree of airway obstruction or inflammation, the outcomes assessed may not have comprehensively evaluated efficacy.

The open-label, intention-to-treat, pragmatic design of the study may have introduced substantial selection bias despite randomization. Because the general practitioners who recruited the patients also monitored their treatment, volunteer or referral bias may have occurred. As the authors acknowledged, differences in practice and treatment existed due to variation in the training and education of the general practitioners. In addition, the study was funded by a pharmaceutical company and the trial medication was dispensed free of cost, a further potential source of bias.

Further consideration of the study medication also raises questions about the study design. Combined therapy with low- to moderate-dose ICS/LABA is currently indicated for patients with asthma of moderate persistent or greater severity. The current US insurance system encourages management to begin with low-dose ICS before escalating to a combination of ICS/LABA. Given the previously published evidence of superiority for combined ICS/LABA over ICS alone for asthma control [2,3], the inclusion criteria could have been limited to patients who were already receiving ICS/LABA in order to more accurately equate the trial medication with the accepted standard medications. By including patients who were on ICS/LABA as well as those on ICS alone (in the usual care group, 64% were on ICS/LABA and 36% were on ICS), the likelihood of responders in the FF/VI group could have been inflated relative to the usual care group. In addition, patients with low-severity asthma, such as intermittent or mild persistent asthma, could have been overtreated by FF/VI per current guidelines. About 30% of the patients initially enrolled had baseline ACT scores greater than 20, and some patients had less severe asthma as indicated by treatment with ICS alone. The authors also included 2 different doses of fluticasone furoate in their study group.

It is of concern that the incidence of pneumonia in this study was slightly higher in the FF/VI group than in the usual care group. Although the difference was not statistically significant, an increased pneumonia risk with ICS has been observed in many other studies [4,5].

Applications for Clinical Practice

Fluticasone furoate plus vilanterol (FF/VI) can be a therapeutic option in patients with asthma, with a small increased risk for pneumonia similar to that of other inhaled corticosteroids. However, a stepwise therapeutic approach, following the published asthma treatment strategy [6], should be emphasized when escalating treatment to include FF/VI.

—Minkyung Kwon, MD, Joel Roberson, MD, and Neal Patel, MD, Pulmonary and Critical Care Medicine, Mayo Clinic Florida, Jacksonville, FL (Drs. Kwon and Patel), and Department of Radiology, Oakland University/Beaumont Health, Royal Oak, MI (Dr. Roberson)

References

1. Chalkidou K, Tunis S, Whicher D, et al. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials 2012;9:436–46.

2. O’Byrne PM, Bleecker ER, Bateman ED, et al. Once-daily fluticasone furoate alone or combined with vilanterol in persistent asthma. Eur Respir J 2014;43:773–82.

3. Bateman ED, O’Byrne PM, Busse WW, et al. Once-daily fluticasone furoate (FF)/vilanterol reduces risk of severe exacerbations in asthma versus FF alone. Thorax 2014;69:312–9.

4. McKeever T, Harrison TW, Hubbard R, Shaw D. Inhaled corticosteroids and the risk of pneumonia in people with asthma: a case-control study. Chest 2013;144:1788–94.

5. Crim C, Dransfield MT, Bourbeau J, et al. Pneumonia risk with inhaled fluticasone furoate and vilanterol compared with vilanterol alone in patients with COPD. Ann Am Thorac Soc 2015;12:27–34.

6. GINA. Global strategy for asthma management and prevention. 2017. Accessed at ginaasthma.org.


Issue
Journal of Clinical Outcomes Management - 24(11)

Prolonged Survival in Metastatic Melanoma

Article Type
Changed
Wed, 04/29/2020 - 11:53

Study Overview

Objective. To compare clinical outcomes and toxicities between combined nivolumab plus ipilimumab (N+I) versus ipilimumab alone (I) or nivolumab alone (N) in patients with advanced melanoma.

Design. Randomized controlled trial with 1:1:1 assignment to N+I (nivolumab 1 mg/kg + ipilimumab 3 mg/kg every 3 weeks for 4 doses, followed by nivolumab 3 mg/kg every 2 weeks), N alone (3 mg/kg every 2 weeks), or I plus placebo (3 mg/kg every 3 weeks for 4 doses).

Setting and participants. Adult patients with previously untreated stage III (unresectable) or stage IV melanoma and an ECOG performance status of 0 or 1 (on a scale of 0–5, with higher scores indicating greater disability). Patients with active brain metastases, ocular melanoma, or autoimmune disease were excluded. The study took place in academic and community practices across the United States, Europe, and Australia. A total of 945 patients were randomized. If patients progressed, additional therapies were at clinician discretion.

Main outcome measures. Primary end points were progression-free survival and overall survival. Secondary end points were objective response rate, toxicity profile, and evaluation of PD-L1 (programmed death-ligand 1) as a predictive marker for progression-free survival and overall survival.

Main results. Baseline patient characteristics were published previously [1]. There were no significant differences among the groups except that the I-only group had a higher frequency of brain metastases (4.8%) than the N-only group (2.5%). At a minimum follow-up of 36 months, median overall survival was not reached in the N+I group, was 37.6 months in the N-only group, and was 19.9 months in the I-only group (hazard ratio [HR] for death, 0.55 [P < 0.001] for N+I vs. I and 0.65 [P < 0.001] for N vs. I). Overall survival at 3 years was 58% in the N+I group vs. 52% in the N-only group vs. 34% in the I-only group. The objective response rate was 58% in the N+I group vs. 44% in the N-only group vs. 19% in the I-only group. Progression-free survival at 3 years was 39% in the N+I group, 32% in the N-only group, and 10% in the I-only group. The level of PD-L1 expression was not associated with response or overall survival. Grade 3 or 4 treatment-related adverse events occurred in 59% of the N+I group vs. 21% of the N group vs. 28% of the I group. As therapy after progression was left to clinician discretion, crossover was common, with 43% of the I-only group receiving nivolumab as second-line therapy and 28% of the N-only group receiving ipilimumab as second-line therapy.
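
The 3-year overall survival figures above can also be restated as absolute risk reductions and numbers needed to treat. The calculation below is a simple arithmetic sketch based on the reported percentages, not an analysis from the trial itself.

```python
# 3-year overall survival (OS), as reported in the summary above.
os3 = {"N+I": 0.58, "N": 0.52, "I": 0.34}

# Absolute survival gain of each nivolumab-containing regimen over ipilimumab.
arr_combo = os3["N+I"] - os3["I"]
arr_nivo = os3["N"] - os3["I"]

# Number needed to treat: patients treated per additional 3-year survivor.
nnt_combo = 1 / arr_combo
nnt_nivo = 1 / arr_nivo

print(f"N+I vs I: ARR = {arr_combo:.0%}, NNT ≈ {nnt_combo:.1f}")
print(f"N   vs I: ARR = {arr_nivo:.0%}, NNT ≈ {nnt_nivo:.1f}")
```

On these numbers, roughly 4 to 6 patients would need to be treated with a nivolumab-containing regimen instead of ipilimumab alone for one additional patient to be alive at 3 years.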

Treatment-related events that led to therapy discontinuation occurred much more frequently in patients who received N+I (40%) vs. N (12%) vs. I (16%). However, among the N+I patients who discontinued after a median of 3 cycles of treatment, 67% were still alive at 3 years. In addition, when adverse events were managed according to safety guidelines, most immune-mediated adverse events resolved within 3 to 4 weeks. The most common grade 3 or 4 adverse events in the N+I group were diarrhea (9%), elevated lipase (11%), and elevated liver transaminases (9%). A total of 2 treatment-related deaths were reported in the N+I group.

Conclusion. Both the combination therapy of nivolumab + ipilimumab and nivolumab alone offer superior 3-year overall survival and progression-free survival compared with ipilimumab alone in advanced melanoma, with acceptable toxicity profiles.

Commentary

Historically, unresectable and metastatic melanoma has had a dismal prognosis, with chemotherapy response rates of only about 10% to 15%, and these responses were rarely durable [2]. The previous standard of care was high-dose IL-2, a form of immunotherapy that leads to long-term survival in a small minority of patients (~15%) [3]. The encouraging results seen in this small minority led to optimism about additional immune-modifying agents.

The novel immunotherapy agents known as checkpoint inhibitors are antibodies directed against PD-1 (nivolumab and pembrolizumab), PD-L1 (atezolizumab, avelumab, and durvalumab), and CTLA-4 (ipilimumab). Each of these antigens is critical in the T-cell process known as checkpoint inhibition. When these antigens are activated, they inhibit T cells, a process critical for self-recognition in healthy individuals without cancer. However, many malignancies have developed molecular mechanisms to activate these checkpoint pathways and turn off T-cell anti-tumor activity. Checkpoint inhibitor antibodies, as used in this study, disinhibit T cells and thereby allow them to exert anti-tumor activity. These drugs have been truly ground-breaking and are now FDA-approved in a number of malignancies, including bladder cancer, non–small cell lung cancer, head and neck squamous cell carcinoma, refractory Hodgkin lymphoma, mismatch repair–deficient GI adenocarcinomas, renal cell carcinoma, and Merkel cell carcinoma. They often offer the additional advantage of an improved toxicity profile compared with traditional cytotoxic chemotherapy, as they are not typically associated with cytopenias, nausea, or hair loss, for example [4].

This study reports 3-year data from the CheckMate 067 trial. As reported, checkpoint inhibition has led to truly remarkable improvements in outcomes for patients with advanced melanoma: the authors demonstrated superiority of nivolumab plus ipilimumab and of nivolumab alone over ipilimumab alone. These results are similar to those of the KEYNOTE-006 trial, which compared pembrolizumab (another anti-PD-1 antibody) with ipilimumab; in KEYNOTE-006, overall survival at 33 months was 50% in the pembrolizumab group versus 39% in the ipilimumab group.

In this study, the combination therapy was more toxic and required treatment discontinuation more often, though, importantly, 3-year overall survival was 67% even among those who discontinued therapy. Grade 3 or 4 toxicity events appeared to be associated with efficacy in this study. This is not surprising, as the same association has been seen in other tumor types [5], though it deserves more dedicated investigation as a prognostic marker in this population.

Applications for Clinical Practice

In this well-designed and well-executed multicenter randomized trial, funded by Bristol-Myers Squibb and conducted in a selected population with good performance status, all 3 immunotherapy regimens demonstrated impressive results in the management of advanced melanoma. The combination of nivolumab and ipilimumab was the most effective, with markedly higher survival and response rates, but also with higher toxicity requiring treatment discontinuation, though discontinuation did not decrease the efficacy of the therapy. Both nivolumab plus ipilimumab and nivolumab alone are acceptable treatments for patients with advanced melanoma and good performance status; cost and comorbidities will be critical in personalizing therapy.

—Matthew Painschab, MD, University of North Carolina, Chapel Hill, NC

References

1. Larkin J, Chiarion-Sileni V, Gonzalez R, et al. Combined nivolumab and ipilimumab or monotherapy in untreated melanoma. N Engl J Med 2015;373:23–34.

2. Hill GJI, Krementz ET, Hill HZ. Dimethyl triazeno imidazole carboxamide and combination therapy for melanoma. Cancer 1984;53:1299–305.

3. Atkins MB, Lotze MT, Dutcher JP, et al. High-dose recombinant interleukin-2 therapy for patients with metastatic melanoma: analysis of 270 patients treated between 1985 and 1993. J Clin Oncol 1999;17:2105–16.

4. Michot JM, Bigenwald C, Champiat S, et al. Immune-related adverse events with immune checkpoint blockade: a comprehensive review. Eur J Cancer 2016;54:139–48.

5. Haratani K, Hayashi H, Chiba Y, et al. Association of immune-related adverse events with nivolumab efficacy in non-small-cell lung cancer. JAMA Oncol 2017 Sept 21.



Follow-up of Prostatectomy versus Observation for Early Prostate Cancer

Article Type
Changed
Wed, 04/29/2020 - 11:50

Objective. To determine differences in all-cause and prostate cancer–specific mortality between patients who underwent watchful waiting and those who underwent radical prostatectomy (RP) for early-stage prostate cancer, overall and within subgroups.

Design. Randomized prospective multicenter trial (PIVOT study).

Setting and participants. Study participants were Department of Veterans Affairs (VA) patients younger than 75 years with biopsy-proven localized prostate cancer (T1–T2, M0 by TNM staging, centrally confirmed by the pathology laboratory at Baylor) diagnosed between November 1994 and January 2002. They were patients at VA facilities associated with NCI medical centers. Patients had to be eligible for RP and not limited by concomitant medical comorbidities. Patients were excluded if they had undergone therapy for prostate cancer other than transurethral resection of the prostate (TURP) for diagnostic purposes, including radiation, androgen deprivation therapy (ADT), chemotherapy, or definitive surgery. They were also excluded if they had a PSA > 50 ng/mL or a bone scan suggestive of metastatic disease.

Main outcome measures. The primary outcome was all-cause mortality; the secondary outcome was prostate cancer–specific mortality. Both were measured from the date of diagnosis to August 2014 or until death. A third-party endpoints committee blinded to treatment arm determined the cause of death from medical record review.

Main results. 731 men with a mean age of 67 years were randomly assigned to RP or watchful waiting. The median PSA was 7.8 ng/mL, 75% of patients had a Gleason score ≤ 7, and 74% had low- or intermediate-risk prostate cancer. As of August 2014, 468 of 731 men had died; cause of death was unavailable in 7 patients (2 in the surgery arm and 5 in the observation arm). Median duration of follow-up to death or end of follow-up was 12.7 years. All-cause mortality was not significantly different between the RP and observation arms (hazard ratio 0.84, 95% confidence interval [CI] 0.7–1.01, P = 0.06). The incidence of death at 19.5 years was 61.3% in patients assigned to surgery versus 66.8% in the watchful waiting arm (relative risk 0.92, 95% CI 0.82–1.02). Deaths from prostate cancer or its treatment occurred in 69 patients: 65 from prostate cancer and 4 from treatment. Prostate cancer–associated mortality was not significantly lower in the RP arm than in the watchful waiting arm (hazard ratio 0.63, 95% CI 0.39–1.02, P = 0.06). Mortality was not significantly reduced in any examined subgroup (age ≥ 65 or < 65 years, white or black race, PSA < 10 or > 10 ng/mL, low-, intermediate-, or high-grade disease, Gleason score). Fewer men who underwent surgery (40.9%) had disease progression than men who underwent observation (68.4%). Most of these patients experienced local progression: 34.1% in the surgery arm and 61.9% in the observation arm. Distant progression was seen in 10.7% of patients treated with RP and 14.2% of those observed. Treatment for progression (local, asymptomatic, or by PSA rise) occurred in 59.7% of men assigned to observation and 33.5% of men assigned to surgery. ADT was more frequently utilized in men who were initially observed (44.4%) than in men who had up-front surgery (21.7%).
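The reported relative risk for death at 19.5 years can be reproduced from the cumulative incidences above. A minimal sketch, using the standard log-transformed confidence interval for a risk ratio and assuming approximate arm sizes of 364 surgery and 367 observation patients (the per-arm counts are not stated in this summary):

```python
import math

# Cumulative incidence of death at 19.5 years (from the trial report)
p_surgery, n_surgery = 0.613, 364   # arm size is an assumption, not from this summary
p_observe, n_observe = 0.668, 367   # arm size is an assumption, not from this summary

# Relative risk of death, surgery versus observation
rr = p_surgery / p_observe

# Standard error of log(RR) for two binomial proportions
se_log_rr = math.sqrt(
    (1 - p_surgery) / (n_surgery * p_surgery)
    + (1 - p_observe) / (n_observe * p_observe)
)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Under these assumptions the calculation reproduces the reported relative risk of 0.92 with a 95% CI of 0.82–1.02, consistent with the nonsignificant difference in all-cause mortality.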

With regard to patient-reported outcomes (PROs), more men assigned to RP reported bothersome symptoms such as physical discomfort and limitations in performing activities of daily living (ADLs) at 2 years than men assigned to observation; this difference did not persist beyond 2 years. The use of incontinence pads was markedly higher in surgically treated men than in untreated men: 40% of patients in the surgery arm used at least 1 incontinence pad per day within 6 months of RP, and this proportion remained unchanged at 10 years. Rates of erectile dysfunction were lower in men who were observed than in those who underwent surgery at 2 years (surgery 80% versus observation 45%), 5 years (80% versus 55%), and 10 years (85% versus 70%). Rates of optimal sexual function were lower in resected men than in observed men at 1 year (surgery 35% versus observation 65%), 5 years (38% versus 55%), and 10 years (50% versus 70%).

Conclusion. Patients with localized prostate cancer who were randomized to observation rather than RP did not experience greater all-cause mortality or prostate cancer–specific mortality than their surgical counterparts. Furthermore, they experienced less erectile dysfunction, less sexual function impairment, and less incontinence than patients who underwent surgery. Patients who underwent surgery had higher rates of ADL dysfunction and physical discomfort although these differences did not persist beyond 2 years.

Commentary

Nearly 162,000 men will be diagnosed with prostate cancer in 2017, and it is anticipated that 27,000 will succumb to their disease [1]. This ratio of annual deaths to incident cases is one of the lowest amongst all cancer sites and suggests that most prostate cancers are indolent. Localized prostate cancer is usually defined by low (Gleason score ≤ 6, PSA < 10 ng/mL, and ≤ T2 stage) or intermediate (Gleason score ≤ 7, PSA 10–20 ng/mL, and ≤ T2b stage) risk characteristics. 70% of patients present with low-risk disease, which carries a mortality risk of close to 6% at 15 years [2]. Despite this, nearly 90% of these patients are treated with RP, external beam radiation, or brachytherapy. Some published studies suggest that up to 60% of low-risk prostate cancer patients may be overtreated [3,4]. The decision to treat low-risk patients is controversial, as morbidities (eg, sexual dysfunction, erectile dysfunction, incontinence) from radical prostatectomy or focal radiation therapy are significant while the potential gain may be minimal.

Two other trials in addition to the current PIVOT follow-up study have sought to answer the question of whether observation (through either watchful waiting or active surveillance) or treatment (surgery or radiation) is the optimal approach in the management of patients with localized prostate cancer. The SPCG-4 trial [5], which began enrollment in the pre-PSA screening era, included Scandinavian patients with biopsy-proven prostate cancer who were younger than 75 years and had a life expectancy > 10 years, ≤ T2 lesions, and PSA < 50 ng/mL. Enrollment began in 1989, and patients were followed for more than 20 years; they were seen in clinic every 6 months for the first 2 years and annually thereafter. The primary outcomes of the trial were death from any cause, death from prostate cancer, and risk of bony and visceral metastases. 447 of 695 included men (200 in the RP group and 247 in the watchful waiting group) had died by 2012. The cumulative incidence of death from prostate cancer at the 18-year follow-up point was 17.7% in the surgery arm versus 28.7% in the observation arm. The incidence of distant metastases at the 18-year follow-up point was 26.1% in the radical prostatectomy arm and 38.3% in the watchful waiting group. 67.4% of men assigned to watchful waiting received ADT, while 42.4% of men treated with prostatectomy received ADT palliatively after progression [5].

The ProtecT trial was a United Kingdom study that enrolled 1643 men with prostate cancer aged 50–69 years between 1999 and 2009. The trial randomized men to 3 arms: watchful waiting, RP, or radiation therapy. Patients were eligible for the study if they were younger than 70 years and had ≤ T2 stage disease; 97% of patients had a Gleason score ≤ 7. The primary outcome was prostate cancer–associated mortality at 10 years. Secondary outcomes included death from any cause, rates of distant metastases, and clinical progression. At the end of follow-up, prostate cancer–specific survival was 98.8% in all groups, with no significant differences between groups. There was no evidence that prostate cancer–associated mortality differed between groups when stratified by Gleason score, age, PSA, or clinical stage. Additionally, all-cause mortality rates were equivalent across groups [6].

One of the primary reasons why PIVOT and ProtecT may have had different outcomes than the SPCG-4 trial may relate to the aggressiveness of tumors in the various studies. Median PSA levels in the PIVOT and ProtecT trials, respectively, were 7.8 ng/mL and 4.2 ng/mL, compared with 13.2 ng/mL in the SPCG-4 trial. 70% and 77% of patients in PIVOT and ProtecT, respectively, had a Gleason score ≤ 6 compared with 57% in the SPCG-4 trial. It is possible that SPCG-4 demonstrated a benefit of RP compared to observation because more patients had higher-risk tumors. Other studies have assessed the economic cost of treatment versus observation in low-risk prostate cancer patients using outcomes such as quality-adjusted life expectancy (QALE). In a 2013 decision analysis, observation was more effective and less costly than up-front treatment with radiation therapy or RP. Specifically, amongst modes of observation, watchful waiting rather than active surveillance (with every-6-months PSA screening) was more effective and less expensive [7].

Some of the strengths of the PIVOT trial include its prospective randomized design, multicenter patient cohort, central blinded pathology review, and prolonged follow-up of nearly 20 years. The trial also had several important limitations. First, the trial enrolled fewer patients than the investigators originally intended (2000 patients) and was consequently underpowered to detect the prespecified mortality difference between the arms. Second, nearly 20% of patients were not adherent with their treatment arm assignments, which could have confounded the results. Finally, the trial included a patient population that was sicker than the average patient diagnosed with prostate cancer in the community. Trial patients were more likely to succumb to diseases other than prostate cancer and thus may not have lived long enough to demonstrate a difference between the trial arms (the 20-year mortality rate was close to 50% in trial patients compared with 30% in the general population post prostatectomy).

Applications for Clinical Practice

The NCCN guidelines suggest that patients with low-risk or intermediate-risk prostate cancer with life expectancies < 10 years should proceed with observation alone. In patients with low-risk disease and life expectancies > 10 years, active surveillance, radiation therapy, or RP are all recommended options. In intermediate-risk patients with life expectancies of > 10 years, treatment with surgery or radiation is warranted. Based on the findings from the PIVOT trial and other trials mentioned above, observation seems to be the most reasonable approach in patients with low-risk prostate cancer. The risks of treatment with RP or radiation outweigh the potential benefits from therapy, particularly in the absence of long-term mortality benefit.

—Satya Das, MD, Vanderbilt Ingram Cancer Center, Nashville, TN

References

1. Surveillance, Epidemiology, and End Results (SEER) Program. Cancer stat facts: prostate cancer. https://seer.cancer.gov/statfacts/html/prost.html.

2. Lu-Yao G, Albertsen P, Moore D, et al. Outcomes of localized prostate cancer following conservative management. JAMA 2009;302:1202–9.

3. Cooperberg M, Broering J, Kantoff P, et al. Contemporary trends in low risk prostate cancer: risk assessment and treatment. J Urol 2007;178(3 Pt 2):S14–9.

4. Welch H, Black W. Overdiagnosis in cancer. J Natl Cancer Inst 2010;102:605–13.

5. Bill-Axelson A, Holmberg L, Garmo H, et al. Radical prostatectomy or watchful waiting in early prostate cancer. N Engl J Med 2014;370:932–42.

6. Hamdy F, Donovan J, Lane J, et al. 10-year outcomes after monitoring, surgery, or radiotherapy for localized prostate cancer. N Engl J Med 2016;375:1415–24.

7. Hayes J, Ollendorf D, Pearson S, et al. Observation versus initial treatment for men with localized, low-risk prostate cancer: a cost-effectiveness analysis. Ann Intern Med 2013;158:853–60.

Article PDF
Issue
Journal of Clinical Outcomes Management - 24(11)
Publications
Topics
Sections
Article PDF
Article PDF

Objective. To determine differences in all-cause and prostate cancer–specific mortality between subgroups of patients who underwent watchful waiting versus radical prostactectomy (RP) for early-stage prostate cancer.

Design. Randomized prospective multicenter trial (PIVOT study).

Setting and participants. Study participants were Department of Veterans Affairs (VA) patients younger than age 75 with biopsy-proven local prostate cancer (T1–T2, M0 by TNM staging and centrally confirmed by pathology laboratory in Baylor) between November 1994 and January 2002. They were patients at NCI medical center–associated VA facilities. Patients had to be eligible for RP and not limited by concomitant medical comorbidities. Patients were excluded if they had undergone therapy for prostate cancer other than transurethral resection of prostate cancer (TURP) for diagnostic purposes including radiation, androgen deprivation theory (ADT), chemotherapy, or definitive surgery. They were also excluded if they had a PSA > 50 ng/mL or a bone scan suggestive of metastatic disease.

Main outcome measures. The primary outcome of the study was all-cause mortality. The secondary outcome was prostate cancer–specific mortality. These were measured from date of diagnosis to August 2014 or until the patient died. A third-party end-points committee blinded to patient arm in the trial determined the cause of death from medical record assessment.

Main results. 731 men with a mean age of 67 were randomly assigned to RP or watchful waiting. The median PSA of patients was 7.8 ng/mL with 75% of patients having a Gleason score ≤ 7 and 74% of patients having low- or intermediate-risk prostate cancer. As of August 2014, 468 of 731 men had died; cause of death was unavailable in 7 patients (2 patients in the surgery arm and 5 in the observation arm). Median duration of follow-up to death or end of follow-up was 12.7 years. All-cause mortality was not significantly different between RP and observation arms (hazard ratio 0.84, 95% confidence interval [CI] 0.7–1.01, P = 0.06). The incidence of death at 19.5 years was 61.3% in patients assigned to surgery versus 66.8% in the watchful waiting arm (relative risk 0.92, 95% CI 0.82–1.02). Deaths from prostate cancer or treatment occurred in 69 patients in the study; 65 from prostate cancer and 4 from treatment. Prostate cancer–associated mortality was not significantly lower in the RP arm than in the watchful waiting arm (hazard ratio 0.63, 95% CI 0.39–1.02, P = 0.06). Mortality was not significantly reduced in any examined subgroup (age > or < 65, white or black ethnicity, PSA > 10 ng/mL or < 10 ng/mL, low/high/intermediate grade, Gleason score). Fewer men who underwent surgery (40.9%) had progression compared to those who underwent observation (68.4%). Most of these patients experienced local progression: 34.1% in the surgery arm and 61.9% in the observation arm. Distant progression was seen in 10.7% of patients treated with RP and 14.2% in the untreated arm. Treatment for progression (local, asymptomatic or by PSA rise) occurred in 59.7% of men assigned to observation and in 33.5% of men assigned to surgery. ADT was more frequently utilized as a treatment modality in men who were initially observed (44.4%) than in men who had up-front surgery (21.7%).

With regard to patient-related outcomes (PROs), more men assigned to RP reported bothersome symptoms such as physical discomfort and limitations in performing activities of daily living (ADLs) at 2 years than in men who did not undergo the intervention. This difference did not persist at later time points beyond 2 years. The use of incontinence pads was markedly higher in surgically treated men than in untreated men. 40% of patients in the treatment arm had to use at least 1 incontinence pad per day within 6 months of RP; this number remained unchanged at 10 years. Rates of erectile dysfunction were reported as lower at 2 (80% versus 45%), 5 (80% versus 55%) and 10 (85% versus 70%) years in men who were watched versus those who underwent surgery. Rates of optimal sexual function were reported as lower in resected men at 1 (35% versus 65%), 5 (38% versus 55%) and 10 (50% versus 70%) years than in men who were watched.

Conclusion. Patients with localized prostate cancer who were randomized to observation rather than RP did not experience greater all-cause mortality or prostate cancer–specific mortality than their surgical counterparts. Furthermore, they experienced less erectile dysfunction, less sexual function impairment, and less incontinence than patients who underwent surgery. Patients who underwent surgery had higher rates of ADL dysfunction and physical discomfort although these differences did not persist beyond 2 years.

Commentary

Nearly 162,000 men will be diagnosed with prostate cancer in 2017, and it is anticipated 27,000 will succumb to their disease [1]. This ratio of incident cases to annual mortality represents one of the lowest ratios amongst all cancer sites and suggests most prostate cancers are indolent. Localized prostate cancer is usually defined by low (Gleason score ≤ 6, PSA < 10 ng/mL and ≤ T2 stage) or intermediate (Gleason score ≤ 7, PSA 10–20 ng/mL, and ≤ T2b stage) risk characteristics. 70% of patients present with low-risk disease, which carries a mortality risk of close to 6% at 15 years [2]. Despite this, nearly 90% of these patients are treated with RP, external beam radiation, or brachytherapy. Some published studies suggest up to 60% of low-risk prostate cancer patients may be overtreated [3,4]. The decision to treat low-risk patients is controversial, as morbidities (eg, sexual dysfunction, erectile dysfunction, incontinence) from a radical prostatectomy or focal radiation therapy are significant while the potential gain may be minimal.

Two other trials in addition to current PIVOT follow-up study have sought to answer the question of whether observation (through either watchful waiting or active surveillance) or treatment (surgery or radiation) is the optimal approach in the management of patients with localized prostate cancer. The SPCG-4 trial [5], which began enrollment in the pre-PSA screening era, included Scandinavian patients with biopsy-proven prostate cancer who were < 75, and had life expectancy > 10 years, ≤ T2 lesions, and PSA < 50 ng/mL. Patients began enrollment in 1989 and were watched for more than 20 years. They were seen in clinic every 6 months for the first 2 years and annually thereafter. The primary outcomes of the trial were death from any cause, death from prostate cancer, or risk of bony and visceral metastases. 447 of 695 included men (200 men in the RP group and 247 men in the watchful waiting group) had died by 2012. The cumulative incidence of death from prostate cancer at the 18-year follow-up point was 17.7% in the surgery arm versus 28.7% in the observation arm. The incidence of distant metastases at the 18-year follow-up point was 26.1% in the radical prostatectomy arm and 38.3% in the watchful waiting group. 67.4% of men assigned to watchful waiting utilized ADT while 42.4% of men treated with prostatectomy utilized ADT palliatively post progression [5].

Vaccine bottles

The ProtecT trial was a United Kingdom study that enrolled 1643 men with prostate cancer aged 50–69 years between 1999 and 2009. The trial randomized men to 3 arms: watchful waiting, RP, or radiation therapy. Patients were eligible for the study if they were < 70 and had ≤ T2 stage disease. 97% of patients had a Gleason score ≤ 7. The primary outcome was prostate cancer–associated mortality at 10 years. Secondary outcomes included death from any cause, rates of distant metastases, and clinical progression. At the end of follow-up, prostate cancer–specific survival was 98.8% in all groups with no significant differences between groups. There was no evidence that differences between prostate cancer–associated mortality varied between groups when stratified by Gleason score, age, PSA, or clinical stage. Additionally, all-cause mortality rates were equivalently distributed across groups [6].

One of the primary reasons why PIVOT and ProtecT may have had different outcomes than the SPCG-4 trial may relate to the aggressiveness of tumors in patients in the various studies. Median PSA levels in the PIVOT and ProtecT trials, respectively, were 7.8 ng/mL and 4.2 ng/mL, compared with 13.2 ng/mL in the SPCG-4 trial. 70% and 77% of patients in PIVOT and ProtecT, respectively, had Gleason score ≤ 6 compared with 57% in the SPCG-4 trial. It is possible that SPCG-4 demonstrated the benefit of RP compared to observation because more patients had higher-risk tumors. Other studies have assessed the economic cost of treatment versus observation in low-risk prostate cancer patients using outcomes such as quality-adjusted life events (QALEs). In a 2013 decision analysis, observation was more effective and less costly than up-front treatment with radiation therapy or RP. Specifically, amongst modes of observation, watchful waiting rather than active surveillance (with every-6-months PSA screening) was more effective and less expensive [7].

Some of the strengths of the PIVOT trial include its prospective randomized design, multicenter patient cohorts, central blinded pathology review, and prolonged follow-up time of nearly 20 years. The trial also had several important limitations. First, the trial included a smaller sample size of patients than the investigators originally intended (2000 patients) and was subsequently underpowered to detect the predetermined outcome of mortality difference between the arms. Second, nearly 20% of patients were not adherent with their treatment arm assignments, which could have potentially confounded the results. Finally, the trial included a patient population that was sicker than the average patient diagnosed in the community with prostate cancer. Trial patients were more likely to succumb to diseases other than prostate cancer and thus may not have been alive long enough to demonstrate a difference between the trial arms (20-year mortality rate was close to 50% in trial patients compared with 30% in the general population post prostatectomy).

Applications for Clinical Practice

The NCCN guidelines suggest that patients with low-risk or intermediate-risk prostate cancer with life expectancies < 10 years should proceed with observation alone. In patients with low-risk disease and life expectancies > 10 years, active surveillance, radiation therapy, or RP are all recommended options. In intermediate-risk patients with life expectancies of > 10 years, treatment with surgery or radiation is warranted. Based on the findings from the PIVOT trial and other trials mentioned above, observation seems to be the most reasonable approach in patients with low-risk prostate cancer. The risks of treatment with RP or radiation outweigh the potential benefits from therapy, particularly in the absence of long-term mortality benefit.

—Satya Das, MD, Vanderbilt Ingram Cancer Center, Nashville, TN

Objective. To determine differences in all-cause and prostate cancer–specific mortality between subgroups of patients who underwent watchful waiting versus radical prostactectomy (RP) for early-stage prostate cancer.

Design. Randomized prospective multicenter trial (PIVOT study).

Setting and participants. Study participants were Department of Veterans Affairs (VA) patients younger than age 75 with biopsy-proven local prostate cancer (T1–T2, M0 by TNM staging and centrally confirmed by pathology laboratory in Baylor) between November 1994 and January 2002. They were patients at NCI medical center–associated VA facilities. Patients had to be eligible for RP and not limited by concomitant medical comorbidities. Patients were excluded if they had undergone therapy for prostate cancer other than transurethral resection of prostate cancer (TURP) for diagnostic purposes including radiation, androgen deprivation theory (ADT), chemotherapy, or definitive surgery. They were also excluded if they had a PSA > 50 ng/mL or a bone scan suggestive of metastatic disease.

Main outcome measures. The primary outcome of the study was all-cause mortality. The secondary outcome was prostate cancer–specific mortality. These were measured from date of diagnosis to August 2014 or until the patient died. A third-party end-points committee blinded to patient arm in the trial determined the cause of death from medical record assessment.

Main results. 731 men with a mean age of 67 were randomly assigned to RP or watchful waiting. The median PSA of patients was 7.8 ng/mL with 75% of patients having a Gleason score ≤ 7 and 74% of patients having low- or intermediate-risk prostate cancer. As of August 2014, 468 of 731 men had died; cause of death was unavailable in 7 patients (2 patients in the surgery arm and 5 in the observation arm). Median duration of follow-up to death or end of follow-up was 12.7 years. All-cause mortality was not significantly different between RP and observation arms (hazard ratio 0.84, 95% confidence interval [CI] 0.7–1.01, P = 0.06). The incidence of death at 19.5 years was 61.3% in patients assigned to surgery versus 66.8% in the watchful waiting arm (relative risk 0.92, 95% CI 0.82–1.02). Deaths from prostate cancer or treatment occurred in 69 patients in the study; 65 from prostate cancer and 4 from treatment. Prostate cancer–associated mortality was not significantly lower in the RP arm than in the watchful waiting arm (hazard ratio 0.63, 95% CI 0.39–1.02, P = 0.06). Mortality was not significantly reduced in any examined subgroup (age > or < 65, white or black ethnicity, PSA > 10 ng/mL or < 10 ng/mL, low/high/intermediate grade, Gleason score). Fewer men who underwent surgery (40.9%) had progression compared to those who underwent observation (68.4%). Most of these patients experienced local progression: 34.1% in the surgery arm and 61.9% in the observation arm. Distant progression was seen in 10.7% of patients treated with RP and 14.2% in the untreated arm. Treatment for progression (local, asymptomatic or by PSA rise) occurred in 59.7% of men assigned to observation and in 33.5% of men assigned to surgery. ADT was more frequently utilized as a treatment modality in men who were initially observed (44.4%) than in men who had up-front surgery (21.7%).

With regard to patient-related outcomes (PROs), more men assigned to RP reported bothersome symptoms such as physical discomfort and limitations in performing activities of daily living (ADLs) at 2 years than in men who did not undergo the intervention. This difference did not persist at later time points beyond 2 years. The use of incontinence pads was markedly higher in surgically treated men than in untreated men. 40% of patients in the treatment arm had to use at least 1 incontinence pad per day within 6 months of RP; this number remained unchanged at 10 years. Rates of erectile dysfunction were reported as lower at 2 (80% versus 45%), 5 (80% versus 55%) and 10 (85% versus 70%) years in men who were watched versus those who underwent surgery. Rates of optimal sexual function were reported as lower in resected men at 1 (35% versus 65%), 5 (38% versus 55%) and 10 (50% versus 70%) years than in men who were watched.

Conclusion. Patients with localized prostate cancer who were randomized to observation rather than RP did not experience greater all-cause mortality or prostate cancer–specific mortality than their surgical counterparts. Furthermore, they experienced less erectile dysfunction, less sexual function impairment, and less incontinence than patients who underwent surgery. Patients who underwent surgery had higher rates of ADL dysfunction and physical discomfort although these differences did not persist beyond 2 years.

Commentary

Nearly 162,000 men will be diagnosed with prostate cancer in 2017, and it is anticipated 27,000 will succumb to their disease [1]. This ratio of incident cases to annual mortality represents one of the lowest ratios amongst all cancer sites and suggests most prostate cancers are indolent. Localized prostate cancer is usually defined by low (Gleason score ≤ 6, PSA < 10 ng/mL and ≤ T2 stage) or intermediate (Gleason score ≤ 7, PSA 10–20 ng/mL, and ≤ T2b stage) risk characteristics. 70% of patients present with low-risk disease, which carries a mortality risk of close to 6% at 15 years [2]. Despite this, nearly 90% of these patients are treated with RP, external beam radiation, or brachytherapy. Some published studies suggest up to 60% of low-risk prostate cancer patients may be overtreated [3,4]. The decision to treat low-risk patients is controversial, as morbidities (eg, sexual dysfunction, erectile dysfunction, incontinence) from a radical prostatectomy or focal radiation therapy are significant while the potential gain may be minimal.

Two other trials in addition to current PIVOT follow-up study have sought to answer the question of whether observation (through either watchful waiting or active surveillance) or treatment (surgery or radiation) is the optimal approach in the management of patients with localized prostate cancer. The SPCG-4 trial [5], which began enrollment in the pre-PSA screening era, included Scandinavian patients with biopsy-proven prostate cancer who were < 75, and had life expectancy > 10 years, ≤ T2 lesions, and PSA < 50 ng/mL. Patients began enrollment in 1989 and were watched for more than 20 years. They were seen in clinic every 6 months for the first 2 years and annually thereafter. The primary outcomes of the trial were death from any cause, death from prostate cancer, or risk of bony and visceral metastases. 447 of 695 included men (200 men in the RP group and 247 men in the watchful waiting group) had died by 2012. The cumulative incidence of death from prostate cancer at the 18-year follow-up point was 17.7% in the surgery arm versus 28.7% in the observation arm. The incidence of distant metastases at the 18-year follow-up point was 26.1% in the radical prostatectomy arm and 38.3% in the watchful waiting group. 67.4% of men assigned to watchful waiting utilized ADT while 42.4% of men treated with prostatectomy utilized ADT palliatively post progression [5].

Vaccine bottles

The ProtecT trial was a United Kingdom study that enrolled 1643 men with prostate cancer aged 50–69 years between 1999 and 2009. The trial randomized men to 3 arms: watchful waiting, RP, or radiation therapy. Patients were eligible for the study if they were < 70 and had ≤ T2 stage disease. 97% of patients had a Gleason score ≤ 7. The primary outcome was prostate cancer–associated mortality at 10 years. Secondary outcomes included death from any cause, rates of distant metastases, and clinical progression. At the end of follow-up, prostate cancer–specific survival was 98.8% in all groups with no significant differences between groups. There was no evidence that differences between prostate cancer–associated mortality varied between groups when stratified by Gleason score, age, PSA, or clinical stage. Additionally, all-cause mortality rates were equivalently distributed across groups [6].

One of the primary reasons why PIVOT and ProtecT may have had different outcomes than the SPCG-4 trial may relate to the aggressiveness of tumors in patients in the various studies. Median PSA levels in the PIVOT and ProtecT trials, respectively, were 7.8 ng/mL and 4.2 ng/mL, compared with 13.2 ng/mL in the SPCG-4 trial. 70% and 77% of patients in PIVOT and ProtecT, respectively, had Gleason score ≤ 6 compared with 57% in the SPCG-4 trial. It is possible that SPCG-4 demonstrated the benefit of RP compared to observation because more patients had higher-risk tumors. Other studies have assessed the economic cost of treatment versus observation in low-risk prostate cancer patients using outcomes such as quality-adjusted life events (QALEs). In a 2013 decision analysis, observation was more effective and less costly than up-front treatment with radiation therapy or RP. Specifically, amongst modes of observation, watchful waiting rather than active surveillance (with every-6-months PSA screening) was more effective and less expensive [7].

Some of the strengths of the PIVOT trial include its prospective randomized design, multicenter patient cohorts, central blinded pathology review, and prolonged follow-up time of nearly 20 years. The trial also had several important limitations. First, the trial included a smaller sample size of patients than the investigators originally intended (2000 patients) and was subsequently underpowered to detect the predetermined outcome of mortality difference between the arms. Second, nearly 20% of patients were not adherent with their treatment arm assignments, which could have potentially confounded the results. Finally, the trial included a patient population that was sicker than the average patient diagnosed in the community with prostate cancer. Trial patients were more likely to succumb to diseases other than prostate cancer and thus may not have been alive long enough to demonstrate a difference between the trial arms (20-year mortality rate was close to 50% in trial patients compared with 30% in the general population post prostatectomy).

Applications for Clinical Practice

The NCCN guidelines suggest that patients with low-risk or intermediate-risk prostate cancer with life expectancies < 10 years should proceed with observation alone. In patients with low-risk disease and life expectancies > 10 years, active surveillance, radiation therapy, or RP are all recommended options. In intermediate-risk patients with life expectancies of > 10 years, treatment with surgery or radiation is warranted. Based on the findings from the PIVOT trial and other trials mentioned above, observation seems to be the most reasonable approach in patients with low-risk prostate cancer. The risks of treatment with RP or radiation outweigh the potential benefits from therapy, particularly in the absence of long-term mortality benefit.

—Satya Das, MD, Vanderbilt Ingram Cancer Center, Nashville, TN

References

1. SEER. https://seer.cancer.gov/statfacts/html/prost.html.

2. Lu-Yao G, Albertsen P, Moore D, et al. Outcomes of localized prostate cancer following conservative management. JAMA 2009;302:1202–9.

3. Cooperberg M, Broering J, Kantoff P, et al. Contemporary trends in low risk prostate cancer: risk assessment and treatment. J Urol 2007;178(3 Pt 2):S14–9.

4. Welch H, Black W. Overdiagnosis in cancer. J Natl Cancer Inst 2010;102:605–13.

5. Bill-Axelson A, Holmberg L, Garmo H, et al. Radical prostatectomy or watchful waiting in early prostate cancer. N Engl J Med 2014;370:932–42.

6. Hamdy F, Donovan J, Lane J, et al. 10-year outcomes after monitoring, surgery, or radiotherapy for localized prostate cancer. N Engl J Med 2016;375:1415–24.

7. Hayes J, Ollendorf D, Pearson S, et al. Observation versus initial treatment for men with localized, low-risk prostate cancer: a cost-effectiveness analysis. Ann Intern Med 2013;158:853–60.

Issue
Journal of Clinical Outcomes Management - 24(11)

Survival Outcomes in Stage IV Differentiated Thyroid Cancer After Postsurgical RAI versus EBRT

Article Type
Changed
Wed, 04/29/2020 - 12:01

Study Overview

Objective. To evaluate survival trends and differences in a large cohort of patients with stage IV differentiated thyroid cancer treated with radioactive iodine (RAI), external beam radiation therapy (EBRT), or no radiation following surgery.

Design. Multicenter retrospective cohort study using data from the National Cancer Database (NCDB) from 2002–2012.

Setting and participants. The study group consisted of a random sample of all inpatient discharges with a diagnosis of differentiated thyroid cancer (DTC). This yielded a cohort of 11,832 patients with stage IV DTC who underwent primary surgical treatment with thyroidectomy. Patients were stratified by cancer histology into follicular thyroid cancer (FTC) and papillary thyroid cancer (PTC), and additionally into 3 substage groups: IV-A, IV-B, and IV-C. Administrative censoring was implemented at the 5- and 10-year marks of survival time.
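Administrative censoring at a fixed horizon means that any patient still in follow-up past that mark is treated as alive (censored) at the horizon rather than observed further. A minimal pure-Python sketch of this step feeding a Kaplan-Meier survival estimate (toy data, not the NCDB; function names are illustrative):

```python
def administrative_censor(records, horizon):
    """Truncate (time, event) follow-up records at a fixed horizon:
    anything observed past the horizon becomes a censored observation
    (event = 0) exactly at the horizon."""
    return [(min(t, horizon), e if t <= horizon else 0) for t, e in records]

def km_survival(records, at_time):
    """Kaplan-Meier survival probability at `at_time` for (time, event)
    records, where event = 1 marks a death and event = 0 a censoring."""
    death_times = sorted({t for t, e in records if e == 1 and t <= at_time})
    s = 1.0
    for t in death_times:
        n_at_risk = sum(1 for u, _ in records if u >= t)
        deaths = sum(1 for u, e in records if u == t and e == 1)
        s *= 1 - deaths / n_at_risk
    return s

# Toy cohort: follow-up years with 1 = death, 0 = censored.
cohort = [(2, 1), (3, 0), (4, 1), (6, 1), (7, 0), (8, 1)]
five_year = km_survival(administrative_censor(cohort, 5), 5)  # 0.625
```

After censoring at year 5, the three patients followed past that point count as alive at 5 years, so only the deaths at years 2 and 4 enter the estimate.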

Main outcome measures. The primary outcome was all-cause mortality. Survival was analyzed at 5 and 10 years. Multivariate analysis was performed on a number of covariates including age, sex, race, socioeconomic status, TNM stage, tumor grade, surgical length of stay, and surgical treatment variables such as neck dissection and lymph node surgery.

Main results. Most patients (91.24%) had PTC and 8.76% had FTC. Patients in the RAI group were younger on average (FTC, age 66; PTC, age 58) than those in the EBRT (FTC, age 69; PTC, age 65) or no-RT (FTC, age 73; PTC, age 61) groups. In contrast to FTC patients, a large majority of PTC patients underwent surgical neck dissection. Among patients with FTC, there were no significant differences in sex, ethnicity, primary payer, median income quartile, or education level among the 3 groups. In PTC, however, a majority of patients in all 3 groups were female and white/Caucasian. In addition, patients with PTC who received no RT or received RAI were more likely to have private insurance than those who underwent EBRT, who were more often covered under Medicare; these differences in primary payer were statistically significant (P < 0.001).

Statistically significant differences in mortality were observed at 5 and 10 years in both papillary and follicular thyroid cancer among the 3 groups. In the PTC groups, patients treated with EBRT had the highest mortality rates (46.6% at 5 years, 50.7% at 10 years), while patients with PTC receiving no RT had lower mortality rates (22.7% at 5 years, 25.5% at 10 years), and PTC patients receiving RAI had the lowest mortality rates (11.0% at 5 years, 14.0% at 10 years). Similar results were seen in patients with FTC, in which patients treated with EBRT had the highest mortality rates (51.4% at 5 years, 59.9% at 10 years), while patient with FTC receiving no RT had lower mortality rates (45.5% at 5 years, 51% at 10 years), and FTC patients receiving RAI had the lowest mortality rates (29.2% at 5 years, 36.8% at 10 years).

Using univariate analysis, EBRT showed a statistically significant increase in 5- and 10-year mortality for patients with PTC stage IV-A and IV-B as compared with no radiation. This was demonstrated in both the stage IV-A and IV-B subgroups at 5 years (EBRT 5-year HR, PTC stage IV-A, 2.04, 95% confidence interval [CI] 1.74–2.39, P < 0.001; EBRT 5-year HR, PTC stage IV-B, 2.23, 95% CI 1.42–3.51, P < 0.001) and at 10 years (EBRT 10-year HR, PTC stage IV-A, 2.12, 95% CI 1.79–2.52, P < 0.001; EBRT 10-year HR, PTC stage IV-B, 2.03, 95% CI 1.33–3.10, P < 0.001). RAI showed a statistically significant decrease in 5- and 10-year mortality in both PTC and FTC compared with no radiation, regardless of pathologic substage. The largest reduction in risk was seen in FTC stage IV-B patients at 5 years (RAI 5-year HR, FTC stage IV-B, 0.31, 95% CI 0.12–0.80, P < 0.05). Multivariate analysis showed similar results, except that the difference between EBRT and no RT in stage IV-A PTC was no longer statistically significant at 5 or 10 years (EBRT 5-year HR, PTC stage IV-A, 1.2, 95% CI 0.91–1.59; EBRT 10-year HR, PTC stage IV-A, 1.29, 95% CI 0.93–1.79). The reductions in death hazard with RAI versus no RT observed in univariate analysis remained statistically significant in all groups on multivariate analysis.
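As a sanity check on figures like these: a hazard ratio is significant at the two-sided 5% level precisely when its 95% CI excludes 1, and the standard error of log(HR) can be recovered from the width of the CI on the log scale. A short illustrative sketch (the helper function is our own, applied to the EBRT 5-year PTC stage IV-A figure quoted above):

```python
import math

def hr_from_ci(hr, ci_low, ci_high, z_crit=1.96):
    """Recover the SE of log(HR) from a reported 95% CI and compute the
    Wald z statistic; the CI excludes 1 iff p < 0.05 (two-sided)."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit)
    z = math.log(hr) / se
    significant = ci_low > 1.0 or ci_high < 1.0
    return se, z, significant

# EBRT 5-year HR, PTC stage IV-A: 2.04 (95% CI 1.74-2.39)
se, z, sig = hr_from_ci(2.04, 1.74, 2.39)  # sig is True, z ~ 8.8
```

The same check shows why the multivariate stage IV-A result (HR 1.2, 95% CI 0.91–1.59) is reported without a P value: its CI crosses 1.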

Multivariate analysis revealed a number of significant covariates. Increasing age was associated with a higher death hazard in all groups except FTC stage IV-B and stage IV-C. Every additional year of age increased the hazard of death by roughly 2% to 5%, up to a maximum of 9% per year. Women overall had a lower hazard of death than men, most notably in PTC. African-American patients had better survival in FTC (at 5 years) but worse survival in PTC (at 5 and 10 years) compared with white patients. Tumor grade showed a dose-response relationship, with the death hazard increasing as tumor differentiation worsened.
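Per-year hazard increases compound multiplicatively over follow-up: a constant 2% per-year increase corresponds to roughly a 1.22-fold hazard after 10 years, and 5% per year to roughly 1.63-fold. A quick arithmetic sketch (illustrative numbers only, not a re-analysis of the study data):

```python
def compounded_hr(per_year_hr, years):
    """Cumulative hazard multiplier after `years` under a constant
    per-year hazard ratio (multiplicative compounding)."""
    return per_year_hr ** years

low = compounded_hr(1.02, 10)   # ~1.22: 2%/year over a decade
high = compounded_hr(1.05, 10)  # ~1.63: 5%/year over a decade
```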


Conclusion. RAI was associated with improved survival in patients with stage IV DTC, while EBRT was associated with poorer survival outcomes.

Commentary

Radioiodine therapy has been used to treat DTC since the 1940s. Radioactive iodine (I-131) is taken up largely by thyroid follicular cells via the sodium-iodide symporter, causing acute thyroid cell death through emission of short-path-length beta particles [1].

External beam radiation therapy (EBRT) is the most common radiation therapy approach to deliver radiation from a source outside of the patient. EBRT machines produce radiation by either radioactive decay of a nuclide or by acceleration of charged particles such as electrons or protons. Using a linear accelerator, charged particles are accelerated to a high enough energy to allow transmission of particles as an electron beam or x-ray, which is subsequently directed at the tumor [2].

This study by Yang and colleagues aimed to examine survival differences in patients with stage IV DTC who received one of these adjuvant radiation modalities post-thyroidectomy. RAI was associated with decreases in the death hazard in both univariate and multivariate analyses across all subgroups: the death hazard was reduced by a factor of 1.53 to 4.66 in multivariate models and 1.63 to 4.92 in univariate models. These findings support the effectiveness of RAI as an adjuvant treatment for DTC following surgical resection.

However, the study has several limitations. As a retrospective cohort study, its lack of randomization introduces a potential source of bias. In addition, because data were collected via the National Cancer Database, the information available on the subjects was limited: disease-specific survival and recurrence rates were not reported, and histologic grade was missing in more than 50% of cases. Finally, older age and more advanced stage in the EBRT cohorts were likely confounders of the increased death hazard and mortality observed, although the authors attempted to adjust for these covariates in the multivariate analysis.

There are a number of potential reasons as to why the RAI-treated patients did significantly better than the EBRT-treated patients. Based on the current literature and guidelines, EBRT is mainly recommended as a palliative treatment of locally advanced, unresectable, or metastatic disease in primarily noniodine-avid tumors. Therefore, it is certainly feasible that patients in this study who underwent treatment with EBRT had more aggressive disease and were thus at higher risk to begin with. Perhaps the indications to treat with EBRT inherently confer a poorer prognosis in advanced DTC patients. In addition, RAI is a systemic treatment modality whereas EBRT is only directed locally to the neck and thus may miss micro-metastatic lesions elsewhere in the body.

Applications for Clinical Practice

Current standard practice in thyroid cancer management involves radioiodine therapy for selected intermediate-risk and all high-risk DTC patients after total thyroidectomy. These patients are treated with I-131 to destroy both remnant normal thyroid tissue and microscopic or subclinical disease remaining after surgery. The decision to administer radioactive iodine post-thyroidectomy in patients with DTC is based on risk stratification of the tumor's clinicopathologic features. The efficacy of RAI depends on many factors, including sites of disease, patient preparation, tumor characteristics, and the dose of radiation administered.

EBRT is currently used much less frequently than RAI in the management of differentiated thyroid cancer. Its main use has been for palliative treatment of locally advanced, unresectable, or metastatic disease in primarily noniodine-avid tumors. It has also been suggested for use in older patients (age 55 years or older) with gross extrathyroidal extension at the time of surgery (T4 disease), or in younger patients with T4b or extensive T4a disease and poor histologic features, with tumors that are strongly suspected to not concentrate iodine. The use of EBRT in other settings is not well established [3,4].

Treatment benefits of RAI in DTC have been extensively studied; however, this is the largest study to examine long-term survival in stage IV DTC, with a cohort of just under 12,000 patients. The results from this large cohort with advanced disease further demonstrate improved overall survival at 5 and 10 years in stage IV DTC patients treated with RAI. RAI is the first-line adjuvant radiation therapy for DTC and should remain the standard of care in thyroid cancer management.

—Kayur Bhavsar, MD, University of Maryland School of Medicine
Baltimore, MD

References

1. Spitzweg C, Harrington KJ, Pinke LA, et al. Clinical review 132: The sodium iodide symporter and its potential role in cancer therapy. J Clin Endocrinol Metab 2001;86:3327–35.

2. DeLaney TF, Kooy HM. Proton and charged particle radiotherapy. Philadelphia: Lippincott Williams & Wilkins; 2008.

3. Giuliani M, Brierley J. Indications for the use of external beam radiation in thyroid cancer. Curr Opin Oncol 2014;26:45–50.

4. Cooper DS, et al. Revised American Thyroid Association management guidelines for patients with thyroid nodules and differentiated thyroid cancer. Thyroid 2009;19:1167–214.


Study Overview

Objective. To evaluate survival trends and differences in a large cohort of patients with stage IV differentiated thyroid cancer treated with radioactive iodine (RAI), external beam radiation therapy (EBRT), or no radiation following surgery.

Design. Multicenter retrospective cohort study using data from the National Cancer Database (NCDB) from 2002–2012.

Setting and participants. The study group consisted of a random sample of all inpatient discharges with a diagnosis of differentiated thyroid cancer (DTC). This yielded a cohort of 11,832 patients with stage IV DTC who underwent primary surgical treatment with thyroidecromy. Patients were stratified by cancer histology into follicular thyroid cancer (FTC) and papillary thyroid cancer (PTC). Patients were additionally stratified into 3 substage groups: IV-A, IV-B, and IV-C. Administrative censoring was implemented at 5 and 10 year marks of survival time.

Main outcome measures. The primary outcome was all-cause mortality. Survival was analyzed at 5 and 10 years. Multivariate analysis was performed on a number of covariates including age, sex, race, socioeconomic status, TNM stage, tumor grade, surgical length of stay, and surgical treatment variables such as neck dissection and lymph node surgery.

Main results. Most patients (91.24%) had PTC and 8.76% had FTC. The average age of patients in the RAI group was younger (FTC, age 66; PTC, age 58) than patients in the EBRT (FTC, age 69; PTC, age 65) or no RT groups. (FTC, age 73; PTC, age 61). In contrast to FTC patients, a large majority of PTC patients underwent surgical neck dissection. There were no significant differences in sex, ethnicity, primary payer, median income quartile, or education level among the 3 groups for patients with FTC. However, in PTC there was a majority of female and ethnically white/Caucasian patients in all 3 groups. In addition, patients with PTC who did not receive RT or received RAI were more likely to have private insurance versus those who underwent EBRT, who were more often covered under Medicare. These differences in primary payer were statistically significant (P < 0.001).

Statistically significant differences in mortality were observed at 5 and 10 years in both papillary and follicular thyroid cancer among the 3 groups. In the PTC groups, patients treated with EBRT had the highest mortality rates (46.6% at 5 years, 50.7% at 10 years), while patients with PTC receiving no RT had lower mortality rates (22.7% at 5 years, 25.5% at 10 years), and PTC patients receiving RAI had the lowest mortality rates (11.0% at 5 years, 14.0% at 10 years). Similar results were seen in patients with FTC, in which patients treated with EBRT had the highest mortality rates (51.4% at 5 years, 59.9% at 10 years), while patient with FTC receiving no RT had lower mortality rates (45.5% at 5 years, 51% at 10 years), and FTC patients receiving RAI had the lowest mortality rates (29.2% at 5 years, 36.8% at 10 years).

Using univariate analysis, EBRT showed a statistically significant increase in 5- and 10-year mortality for patients with PTC stage IV-A and IV-B as compared with no radiation. This was demonstrated in both stage IV-A and IV-B subgroups at 5 years (EBRT 5-year HR PTC stage IV-A = 2.04, 95% confidence interval [CI] 1.74–2.39, P < 0.001; EBRT 5-year HR PTC stage IV-B = 2.23, 95% CI 1.42–3.51, P < 0.001; and 10 years [EBRT 10-year HR PTC stage IV-A = 2.12, 95% CI 1.79-2.52 P < 0.001; EBRT 10-year HR PTC stage IV-B = 2.03, 95% CI 1.33-3.10, P < 0.001). RAI showed a statistically significant decrease in 5- and 10-year mortality in both PTC and FTC compared with no radiation, regardless of pathologic sub-stage. The largest reduction in risk was seen in FTC stage IV-B patients at 5 years [RAI 5 year HR FTC stage IV-B = 0.31, 95% CI 0.12-0.80, P < 0.05). Multivariate analysis was also performed and showed similar results to univariate analysis except that there was no longer a statistically significant difference in EBRT versus no RT in stage IV-A PTC at 5 and 10 years (EBRT 5-year HR PTC stage IV-A = 1.2, 95% CI 0.91–1.59, EBRT 10-year HR PTC stage IV-A = 1.29, 95% CI 0.93–1.79). Reductions in death hazard seen in all groups treated with RAI versus no RT previously observed in univariate analysis remained statistically significant in all groups on multivariate analysis.

Multivariate analysis revealed a number of significant covariates. Increase in age was noted to be associated with higher death hazard in all groups except FTC stage IV-B and stage IV-C. Every additional year of age increased the hazard of death by ~2% to 5%, up to a maximum of 9% per year. Females overall had a lower hazard of death compared with their male counterparts, most notably in PTC. African-American patients had improved survival in FTC (5 years) but lower survival in PTC (5 and 10 years) as compared with white patients. Tumor grade showed a dose response in models studied, with increasing death hazards with worsening tumor differentiation.

 

 

Conclusion. RAI was associated with improved survival in patients with stage IV DTC, while EBRT was associated with poorer survival outcomes.

Commentary

Radioiodine therapy has been used for treatment of DTC since the 1940s. Radioactive iodine (I-131) is largely taken up by thyroid follicular cells via their sodium-iodide transporter causing acute thyroid cell death by emission of short path length beta particles [1].

External beam radiation therapy (EBRT) is the most common radiation therapy approach to deliver radiation from a source outside of the patient. EBRT machines produce radiation by either radioactive decay of a nuclide or by acceleration of charged particles such as electrons or protons. Using a linear accelerator, charged particles are accelerated to a high enough energy to allow transmission of particles as an electron beam or x-ray, which is subsequently directed at the tumor [2].

This study by Yang and colleagues aimed to examine survival differences in patients with stage IV DTC who received one of these adjuvant radiation modalities post-thyroidectomy. All treatment groups showed improved survival, with RAI with decreases in death hazard in both univariate and multivariate analysis. Patients with stage IV DTC prolonged their survival by a factor of 1.53–4.66 in multivariate models and 1.63–4.92 in univariate models. This clearly supports the effectiveness of RAI as an adjuvant treatment to DTC following surgical resection.

However, this study has several limitations. As this was a retrospective cohort study, the lack of randomization introduces a potential source of bias. In addition, since data was collected via the National Cancer Database, there was limited information that could be obtained on the subjects studied. Disease-specific survival and recurrence rates were not reported and even histological grades were missing more than 50% of the time. Finally, older age and more advanced stage in the EBRT cohorts were likely confounders in the results of increased death hazard and mortality that were observed. It should be noted, however, that attempts to adjust for these covariates were made by the authors by analyzing the data using multivariate analysis.

There are a number of potential reasons as to why the RAI-treated patients did significantly better than the EBRT-treated patients. Based on the current literature and guidelines, EBRT is mainly recommended as a palliative treatment of locally advanced, unresectable, or metastatic disease in primarily noniodine-avid tumors. Therefore, it is certainly feasible that patients in this study who underwent treatment with EBRT had more aggressive disease and were thus at higher risk to begin with. Perhaps the indications to treat with EBRT inherently confer a poorer prognosis in advanced DTC patients. In addition, RAI is a systemic treatment modality whereas EBRT is only directed locally to the neck and thus may miss micro-metastatic lesions elsewhere in the body.

Applications for Clinical Practice

Current standard practice in thyroid cancer management involve the use of radioiodine therapy in treatment of selected intermediate-risk and all high-risk DTC patients after total thyroidectomy. These patients are treated with 131-I to destroy both remnant normal thyroid tissue and microscopic or subclinical disease remaining after surgery. The decision to administer radioactive iodine post-thyroidectomy in patients with DTC is based on risk stratification of clinicopathologic features of the tumor. The efficacy of RAI is dependent on many factors including sites of disease, patient preparation, tumor characteristics, and dose of radiation administered.

EBRT is currently used much less frequently than RAI in the management of differentiated thyroid cancer. Its main use has been for palliative treatment of locally advanced, unresectable, or metastatic disease in primarily noniodine-avid tumors. It has also been suggested for use in older patients (age 55 years or older) with gross extrathyroidal extension at the time of surgery (T4 disease), or in younger patients with T4b or extensive T4a disease and poor histologic features, with tumors that are strongly suspected to not concentrate iodine. The use of EBRT in other settings is not well established [3,4].

Treatment benefits of RAI in DTC have been extensively studied; however, this is the largest study that has examined long-term survival in a cohort of just under 12,000 patients with stage IV DTC. The results from this large cohort with advanced disease further demonstrates improved overall survival in stage IV DTC patients treated with RAI at 5 and 10 years. It is clear that RAI is the first-line adjuvant radiation therapy of DTC and should remain the standard of care in thyroid cancer management.

—Kayur Bhavsar, MD, University of Maryland School of Medicine
Baltimore, MD

Study Overview

Objective. To evaluate survival trends and differences in a large cohort of patients with stage IV differentiated thyroid cancer treated with radioactive iodine (RAI), external beam radiation therapy (EBRT), or no radiation following surgery.

Design. Multicenter retrospective cohort study using data from the National Cancer Database (NCDB) from 2002–2012.

Setting and participants. The study group consisted of a random sample of all inpatient discharges with a diagnosis of differentiated thyroid cancer (DTC). This yielded a cohort of 11,832 patients with stage IV DTC who underwent primary surgical treatment with thyroidecromy. Patients were stratified by cancer histology into follicular thyroid cancer (FTC) and papillary thyroid cancer (PTC). Patients were additionally stratified into 3 substage groups: IV-A, IV-B, and IV-C. Administrative censoring was implemented at 5 and 10 year marks of survival time.

Main outcome measures. The primary outcome was all-cause mortality. Survival was analyzed at 5 and 10 years. Multivariate analysis was performed on a number of covariates including age, sex, race, socioeconomic status, TNM stage, tumor grade, surgical length of stay, and surgical treatment variables such as neck dissection and lymph node surgery.

Main results. Most patients (91.24%) had PTC and 8.76% had FTC. The average age of patients in the RAI group was younger (FTC, age 66; PTC, age 58) than patients in the EBRT (FTC, age 69; PTC, age 65) or no RT groups. (FTC, age 73; PTC, age 61). In contrast to FTC patients, a large majority of PTC patients underwent surgical neck dissection. There were no significant differences in sex, ethnicity, primary payer, median income quartile, or education level among the 3 groups for patients with FTC. However, in PTC there was a majority of female and ethnically white/Caucasian patients in all 3 groups. In addition, patients with PTC who did not receive RT or received RAI were more likely to have private insurance versus those who underwent EBRT, who were more often covered under Medicare. These differences in primary payer were statistically significant (P < 0.001).

Statistically significant differences in mortality were observed at 5 and 10 years in both papillary and follicular thyroid cancer among the 3 groups. In the PTC groups, patients treated with EBRT had the highest mortality rates (46.6% at 5 years, 50.7% at 10 years), while patients with PTC receiving no RT had lower mortality rates (22.7% at 5 years, 25.5% at 10 years), and PTC patients receiving RAI had the lowest mortality rates (11.0% at 5 years, 14.0% at 10 years). Similar results were seen in patients with FTC, in which patients treated with EBRT had the highest mortality rates (51.4% at 5 years, 59.9% at 10 years), while patient with FTC receiving no RT had lower mortality rates (45.5% at 5 years, 51% at 10 years), and FTC patients receiving RAI had the lowest mortality rates (29.2% at 5 years, 36.8% at 10 years).

Using univariate analysis, EBRT showed a statistically significant increase in 5- and 10-year mortality for patients with PTC stage IV-A and IV-B as compared with no radiation. This was demonstrated in both stage IV-A and IV-B subgroups at 5 years (EBRT 5-year HR PTC stage IV-A = 2.04, 95% confidence interval [CI] 1.74–2.39, P < 0.001; EBRT 5-year HR PTC stage IV-B = 2.23, 95% CI 1.42–3.51, P < 0.001; and 10 years [EBRT 10-year HR PTC stage IV-A = 2.12, 95% CI 1.79-2.52 P < 0.001; EBRT 10-year HR PTC stage IV-B = 2.03, 95% CI 1.33-3.10, P < 0.001). RAI showed a statistically significant decrease in 5- and 10-year mortality in both PTC and FTC compared with no radiation, regardless of pathologic sub-stage. The largest reduction in risk was seen in FTC stage IV-B patients at 5 years [RAI 5 year HR FTC stage IV-B = 0.31, 95% CI 0.12-0.80, P < 0.05). Multivariate analysis was also performed and showed similar results to univariate analysis except that there was no longer a statistically significant difference in EBRT versus no RT in stage IV-A PTC at 5 and 10 years (EBRT 5-year HR PTC stage IV-A = 1.2, 95% CI 0.91–1.59, EBRT 10-year HR PTC stage IV-A = 1.29, 95% CI 0.93–1.79). Reductions in death hazard seen in all groups treated with RAI versus no RT previously observed in univariate analysis remained statistically significant in all groups on multivariate analysis.

Multivariate analysis revealed a number of significant covariates. Increase in age was noted to be associated with higher death hazard in all groups except FTC stage IV-B and stage IV-C. Every additional year of age increased the hazard of death by ~2% to 5%, up to a maximum of 9% per year. Females overall had a lower hazard of death compared with their male counterparts, most notably in PTC. African-American patients had improved survival in FTC (5 years) but lower survival in PTC (5 and 10 years) as compared with white patients. Tumor grade showed a dose response in models studied, with increasing death hazards with worsening tumor differentiation.

 

 

Conclusion. RAI was associated with improved survival in patients with stage IV DTC, while EBRT was associated with poorer survival outcomes.

Commentary

Radioiodine therapy has been used for treatment of DTC since the 1940s. Radioactive iodine (I-131) is largely taken up by thyroid follicular cells via their sodium-iodide transporter causing acute thyroid cell death by emission of short path length beta particles [1].

External beam radiation therapy (EBRT) is the most common radiation therapy approach to deliver radiation from a source outside of the patient. EBRT machines produce radiation by either radioactive decay of a nuclide or by acceleration of charged particles such as electrons or protons. Using a linear accelerator, charged particles are accelerated to a high enough energy to allow transmission of particles as an electron beam or x-ray, which is subsequently directed at the tumor [2].

This study by Yang and colleagues aimed to examine survival differences in patients with stage IV DTC who received one of these adjuvant radiation modalities post-thyroidectomy. All treatment groups showed improved survival, with RAI with decreases in death hazard in both univariate and multivariate analysis. Patients with stage IV DTC prolonged their survival by a factor of 1.53–4.66 in multivariate models and 1.63–4.92 in univariate models. This clearly supports the effectiveness of RAI as an adjuvant treatment to DTC following surgical resection.

However, this study has several limitations. As this was a retrospective cohort study, the lack of randomization introduces a potential source of bias. In addition, since data was collected via the National Cancer Database, there was limited information that could be obtained on the subjects studied. Disease-specific survival and recurrence rates were not reported and even histological grades were missing more than 50% of the time. Finally, older age and more advanced stage in the EBRT cohorts were likely confounders in the results of increased death hazard and mortality that were observed. It should be noted, however, that attempts to adjust for these covariates were made by the authors by analyzing the data using multivariate analysis.

There are a number of potential reasons as to why the RAI-treated patients did significantly better than the EBRT-treated patients. Based on the current literature and guidelines, EBRT is mainly recommended as a palliative treatment of locally advanced, unresectable, or metastatic disease in primarily noniodine-avid tumors. Therefore, it is certainly feasible that patients in this study who underwent treatment with EBRT had more aggressive disease and were thus at higher risk to begin with. Perhaps the indications to treat with EBRT inherently confer a poorer prognosis in advanced DTC patients. In addition, RAI is a systemic treatment modality whereas EBRT is only directed locally to the neck and thus may miss micro-metastatic lesions elsewhere in the body.

Applications for Clinical Practice

Current standard practice in thyroid cancer management involve the use of radioiodine therapy in treatment of selected intermediate-risk and all high-risk DTC patients after total thyroidectomy. These patients are treated with 131-I to destroy both remnant normal thyroid tissue and microscopic or subclinical disease remaining after surgery. The decision to administer radioactive iodine post-thyroidectomy in patients with DTC is based on risk stratification of clinicopathologic features of the tumor. The efficacy of RAI is dependent on many factors including sites of disease, patient preparation, tumor characteristics, and dose of radiation administered.

EBRT is currently used much less frequently than RAI in the management of differentiated thyroid cancer. Its main use has been for palliative treatment of locally advanced, unresectable, or metastatic disease in primarily noniodine-avid tumors. It has also been suggested for use in older patients (age 55 years or older) with gross extrathyroidal extension at the time of surgery (T4 disease), or in younger patients with T4b or extensive T4a disease and poor histologic features, with tumors that are strongly suspected to not concentrate iodine. The use of EBRT in other settings is not well established [3,4].

The treatment benefits of RAI in DTC have been extensively studied; however, this is the largest study to date examining long-term survival in stage IV DTC, with a cohort of just under 12,000 patients. The results from this large cohort with advanced disease further demonstrate improved overall survival at 5 and 10 years in stage IV DTC patients treated with RAI. RAI remains the first-line adjuvant radiation therapy for DTC and should remain the standard of care in thyroid cancer management.

—Kayur Bhavsar, MD, University of Maryland School of Medicine
Baltimore, MD

References

1. Spitzweg C, Harrington KJ, Pinke LA, et al. Clinical review 132: The sodium iodide symporter and its potential role in cancer therapy. J Clin Endocrinol Metab 2001;86:3327–35.

2. DeLaney TF, Kooy HM. Proton and charged particle radiotherapy. Philadelphia: Lippincott Williams & Wilkins; 2008.

3. Giuliani M, Brierley J. Indications for the use of external beam radiation in thyroid cancer. Curr Opin Oncol 2014;26:45–50.

4. Cooper DS, Doherty GM, Haugen BR, et al. Revised American Thyroid Association management guidelines for patients with thyroid nodules and differentiated thyroid cancer. Thyroid 2009;19:1167–214.

Issue
Journal of Clinical Outcomes Management - 24(11)