Adding radiation to immunotherapy may extend PFS in progressive lung cancer
For patients with metastatic non–small cell lung cancer (NSCLC) who have disease progression on immunotherapy, adding stereotactic body radiotherapy (SBRT) could improve progression-free survival (PFS), according to investigators.
Patients with more CD8+ T cells in circulation and those with higher tumor-infiltrating lymphocyte (TIL) scores derived the most benefit from SBRT, lead author Allison Campbell, MD, PhD, of Yale Cancer Center in New Haven, Conn., and colleagues reported at the annual meeting of the American Society for Radiation Oncology.
“In rare cases, adding radiation to immunotherapy has been shown to result in therapeutic synergy,” Dr. Campbell said. “When we give high-dose radiation to patients on immunotherapy, some tumors that were not targeted by the radiation can shrink, and this is called ‘the abscopal effect.’ ”
The investigators designed the phase 2 trial to determine if the abscopal effect would occur if high-dose radiation was delivered to a single site in patients who had progressed on checkpoint inhibitor therapy. Fifty-six patients were enrolled, all with at least two sites of metastatic NSCLC. Of these patients, 6 had already progressed on immunotherapy, while 50 were naive to immunotherapy and began pembrolizumab during the trial, with 16 eventually progressing; collectively, these 22 patients with disease progression were identified as candidates for SBRT. Almost all candidates (21 out of 22) completed SBRT, which was delivered in three or five high-dose fractions. Only one site was treated, while other sites were tracked over time with computed tomography (CT) to assess for the abscopal effect. In addition, blood was analyzed for circulating immune cell composition.
After a median follow-up of 15.2 months, the disease control rate was 57%, with some abscopal responses detected. Two patients (10%) achieved a partial response lasting more than 1 year, and 10 patients (48%) maintained stable disease after SBRT. Although programmed death-ligand 1 (PD-L1) positivity was associated with a trend toward increased PFS, this was not statistically significant. In contrast, TIL score was significantly correlated with PFS; patients with TIL scores of 2-3 had a median PFS of 6.7 months, compared with 2.2 months among those with TIL scores of 1 or less. Similarly, immune-related adverse events predicted outcome, with patients who experienced such events achieving longer median PFS than those who did not (6.5 vs 2.2 months). Furthermore, blood testing revealed that the best responders had more CD8+ killer T cells and fewer CD4+ regulatory T cells in peripheral blood compared with patients who responded poorly.
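As a quick check on the arithmetic behind these figures, the sketch below recomputes the reported rates from the counts in the article, assuming the 21 patients who completed SBRT form the denominator; it is illustrative only.

```python
# Recompute response rates from the counts reported in the article.
# Assumes the 21 patients who completed SBRT are the denominator.
treated = 21
partial_responses = 2   # durable partial responses lasting >1 year
stable_disease = 10     # stable disease after SBRT

disease_control = partial_responses + stable_disease
print(f"Partial response rate: {partial_responses / treated:.0%}")  # ~10%
print(f"Stable disease rate:   {stable_disease / treated:.0%}")     # ~48%
print(f"Disease control rate:  {disease_control / treated:.0%}")    # ~57%
```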
After Dr. Campbell’s presentation, Benjamin Movsas, MD, chair of radiation oncology at the Henry Ford Cancer Institute in Detroit, offered some expert insight. “[The findings from this study] suggest perhaps that radiation may be able to reinvigorate the immune system,” Dr. Movsas said. “Maybe we can get more mileage out of the immunotherapy with this approach. Could radiation kind of be like an immune vaccine of sorts? There’s a lot of exciting possibilities.”
Dr. Movsas also noted how biomarker findings may be able to guide treatment decisions, highlighting how T cell populations predicted outcomes. “This era of precision medicine is really helping us improve benefits,” he said. “The immune profile really matters.”
The investigators disclosed relationships with Genentech, AstraZeneca, Merck, and others.
SOURCE: Campbell et al. ASTRO 2019. Abstract 74.
REPORTING FROM ASTRO 2019
Closure of women’s health clinics may negatively impact cervical cancer outcomes
States with a decreased number of women’s clinics per capita between 2010 and 2013 were found to have lower cervical cancer screening rates, more advanced stage at presentation, and higher cervical cancer mortality than states with no decrease in clinics, reported lead author Amar J. Srivastava, MD, of Washington University in St. Louis, who noted that these changes occurred over a relatively short time frame.
“We know that women are generally diagnosed through the utilization of Pap smears,” Dr. Srivastava said during a presentation at the annual meeting of the American Society for Radiation Oncology. “These are low-cost tests that are available at multiple low-cost women’s health clinics. Unfortunately ... over the course of the past decade, we’ve seen a significant reduction of these clinics throughout the United States.”
“Between 2010 and 2013, which is the period of interest in this study, we know that about 100 of these women’s health clinics closed,” Dr. Srivastava said. “This was due to a combination of several factors; some of it was due to funding, some of it was due to restructuring of the clinics, and there were also laws passed throughout many states that ultimately led to the closure of many clinics.”
To determine the impact of these closures, the investigators first divided states into those that had women’s clinic closures between 2010 and 2013 and those that did not. Comparisons between these two cohorts drew on two databases. The first was the Behavioral Risk Factor Surveillance System (BRFSS), which provided data from 197,143 cases, enabling assessment of differences in screening. The second was the Surveillance, Epidemiology, and End Results (SEER) registry, which provided data from 10,652 patients, facilitating comparisons of stage at diagnosis and mortality.
Results were described in terms of relative differences between the two cohorts. For instance, the screening rate among women with cervical cancer in states that lost clinics was 1.63% lower than in states that did not lose clinics. This disparity was more pronounced in certain demographic subgroups, including Hispanic women (–5.82%), women aged 21-34 years (–5.19%), unmarried women (–4.10%), and uninsured women (–6.88%).
“Historically, these are marginalized, underserved groups, and unfortunately, it comes as no surprise that these were the groups of women who were most dramatically hit by these changes,” Dr. Srivastava said.
Early-stage diagnosis was also significantly less common in states that had a decreased number of clinics, by a margin of 13.2%. Finally, the overall mortality rate among women with cervical cancer was 36% higher in states with clinic closures, a difference that climbed to 40% when comparing only metro residents.
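Because the study reports outcomes as relative differences between cohorts, the sketch below shows how such a figure is computed; the input rates are hypothetical placeholders, not data from the study.

```python
def relative_difference(rate_closures: float, rate_no_closures: float) -> float:
    """Percent difference of the clinic-closure cohort relative to the comparison cohort."""
    return (rate_closures - rate_no_closures) / rate_no_closures * 100

# Hypothetical rates, for illustration only: a rate of 0.68 vs. 0.50
# corresponds to a relative difference of +36%, the size of the
# mortality gap reported above.
print(f"{relative_difference(0.68, 0.50):+.0f}%")  # +36%
```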
Connecting the dots, Dr. Srivastava suggested that the decreased availability of screening may have led to fewer diagnoses at an early stage, which is more curable than late-stage disease, ultimately translating to a higher mortality rate. After noting that this chain of causality cannot be confirmed, owing to the retrospective nature of the study, Dr. Srivastava finished his presentation with a call to action.
“These findings should really give us some pause,” he said, “as physicians, as people who care about other people, to spend some time, try to figure out what’s going on, and try to address this disparity.”
After the presentation, Geraldine M. Jacobsen, MD, chair of radiation oncology at West Virginia University Cancer Institute, in Morgantown, W.V., echoed Dr. Srivastava’s concern.
“This study really raises broader questions,” Dr. Jacobsen said. “In the United States we’re always engaged in an ongoing dialogue about health care, health care policy, [and] health care costs. But a study like this brings to us the human face of what these dialogues mean. Policy affects people, and if we make changes in health care policy or health care legislation, we’re impacting people’s health and people’s lives.”
The investigators disclosed relationships with Phelps County Regional Medical Center, the Elsa U. Pardee Foundation, the American Society of Clinical Oncology, and ASTRO.
SOURCE: Srivastava AJ et al. ASTRO 2019, Abstract 202.
REPORTING FROM ASTRO 2019
Ovarian function suppression gains support for premenopausal breast cancer
Adding 2 years of ovarian function suppression (OFS) to the standard 5-year regimen of tamoxifen could improve disease-free and overall survival in women with estrogen receptor–positive breast cancer who have been previously treated with chemotherapy and definitive surgery, according to results from the phase 3 ASTRRA trial.
The findings add support to recent results from the similarly designed Suppression of Ovarian Function Trial (SOFT), reported Hyun-Ah Kim, MD, PhD, of Korea Cancer Center Hospital, Seoul, and colleagues.
“Although OFS in breast cancer has been studied for decades and has been used widely in clinical practice, evidence for the benefits of adding OFS to standard adjuvant tamoxifen treatment is insufficient,” the investigators wrote in the Journal of Clinical Oncology.
The ASTRRA trial enrolled 1,483 premenopausal women aged 45 years or younger with estrogen receptor–positive breast cancer who had been previously treated with chemotherapy and definitive surgery. Of those, 1,293 women were randomized to receive either 5 years of tamoxifen, or the same regimen plus 2 years of OFS, at 35 treatment centers in South Korea. In all, 1,282 women were eligible for analysis.
The primary endpoint was disease-free survival, defined as secondary malignancy, invasive contralateral breast cancer, invasive local recurrence, regional recurrence, distant recurrence, or death from any cause. The secondary endpoint was overall survival.
After a median follow-up of 63 months, women who received OFS in addition to tamoxifen had an estimated disease-free survival rate of 91.1%, compared with 87.5% in those who received tamoxifen alone (P = .033). Similarly, adding OFS was associated with a better estimated 5-year overall survival rate, compared with standard monotherapy (99.4% vs. 97.8%; P = .029), Dr. Kim and associates said.
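In absolute terms, these estimates imply modest differences between arms. The sketch below derives the absolute gains and a rough number needed to treat from the reported 5-year rates; it is a back-of-the-envelope calculation that ignores censoring, not an analysis from the trial.

```python
# Back-of-the-envelope absolute benefit from the reported 5-year estimates.
# Ignores censoring; illustrative only.
dfs_ofs, dfs_tam = 0.911, 0.875  # disease-free survival: OFS + tamoxifen vs. tamoxifen
os_ofs, os_tam = 0.994, 0.978    # overall survival

arr_dfs = dfs_ofs - dfs_tam      # absolute risk reduction for a DFS event
arr_os = os_ofs - os_tam

print(f"DFS gain: {arr_dfs:.1%}, NNT ≈ {1 / arr_dfs:.0f}")  # 3.6%, NNT ≈ 28
print(f"OS gain:  {arr_os:.1%}, NNT ≈ {1 / arr_os:.0f}")    # 1.6%, NNT ≈ 62
```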
Despite ASTRRA’s shorter follow-up and smaller population, its results were similar to those from SOFT, most likely because ASTRRA patients had higher-risk disease, the investigators noted.
“The results of ASTRRA confirm the findings of SOFT, that the addition of OFS to tamoxifen provides survival benefits for women [who are] at sufficient risk for recurrence to receive adjuvant chemotherapy and who remain in a premenopausal state after chemotherapy,” they concluded.
The study was primarily funded by AstraZeneca, with additional support from the Korea Institute of Radiological and Medical Sciences. The investigators disclosed relationships with Novartis, Roche, Amgen, and others.
SOURCE: Kim HA et al. J Clin Oncol. 2019 Sep 16. doi: 10.1200/JCO.19.00126.
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Supercooling extends donor liver viability by 27 hours
Standard cooling to 4°C provides just 12 hours of organ preservation, but laboratory testing showed that supercooling to –4°C added 27 hours of viability, reported lead author Reinier J. de Vries, MD, of Harvard Medical School and Massachusetts General Hospital in Boston, and colleagues.
“The absence of technology to preserve organs for more than a few hours is one of the fundamental causes of the donor organ–shortage crisis,” the investigators wrote in Nature Biotechnology.
Supercooling organs to high-subzero temperatures has been shown to prolong organ life while avoiding ice-mediated injury, but techniques that are successful for rat livers have been difficult to translate to human livers because of their larger size, which increases the risk of ice formation, the investigators explained.
Three strategies were employed to overcome this problem: minimization of air-liquid interfaces, development of a new supercooling-preservation solution, and hypothermic machine perfusion to more evenly distribute preservation solution throughout the liver tissue. For recovery of organs after supercooling, the investigators used subnormothermic machine perfusion, which has been used effectively in rat transplants.
In order to measure the impact of this process on organ viability, the investigators first measured adenylate energy content, both before supercooling and after recovery.
“Adenylate energy content, and, particularly, the organ’s ability to recover it during (re)perfusion, is considered the most representative metric for liver viability,” they wrote.
The difference between pre- and postsupercooling energy charge was less than 20%; in comparison, failed liver transplants in large animals and clinical trials have typically involved an energy-charge loss of 40% or more.
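For context, the energy charge referenced here is conventionally the Atkinson adenylate energy charge, computed from the adenine nucleotide pool. A minimal sketch of that standard formula follows; the concentrations are placeholders, and the study’s exact assay details are not reproduced.

```python
def adenylate_energy_charge(atp: float, adp: float, amp: float) -> float:
    """Atkinson energy charge: (ATP + 0.5*ADP) / (ATP + ADP + AMP), ranging 0-1."""
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Placeholder concentrations (arbitrary units), for illustration only:
pre = adenylate_energy_charge(atp=2.8, adp=0.8, amp=0.4)   # before supercooling
post = adenylate_energy_charge(atp=2.2, adp=0.9, amp=0.6)  # after recovery
loss = (pre - post) / pre * 100
print(f"pre {pre:.2f}, post {post:.2f}, loss {loss:.0f}%")  # a loss under 20% suggests viability
```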
To further test organ viability, the investigators measured pre- and postsupercooling levels of bile production, oxygen uptake, and vascular resistance. All of these parameters have been shown to predict transplant success in rats, and bile production has additional precedent from human studies.
On average, bile production, portal resistance, and arterial resistance were not significantly affected by supercooling. Although portal vein resistance was 20% higher after supercooling, this compared favorably with the increases of 100%-150% measured in nonviable livers. Similarly, oxygen uptake rose by a mean of 17%, roughly one-third of the 51% increase observed in livers with impaired viability.
Additional measures of hepatocellular injury, including AST and ALT, were also supportive of viability after supercooling. Histopathology confirmed these findings by showing preserved tissue architecture.
“In summary, we find that the human livers tested displayed no substantial difference in viability before and after extended subzero supercooling preservation,” the investigators wrote.
To simulate transplantation, the investigators reperfused the organs at normal temperature with blood containing platelets, complement, and white blood cells, which are drivers of ischemia-reperfusion injury. During this process, energy charge remained stable, indicating preserved mitochondrial function. Meanwhile, lactate metabolism and bile and urea production increased, suggesting active liver function. Bile pH and HCO3– levels fell within the range consistent with viability. Although bile glucose exceeded proposed criteria, the investigators pointed out that levels still fell within parameters for research-quality livers. Lactate levels also rose within the first hour of reperfusion, but the investigators suggested that this finding should be interpreted in context.
“It should be considered that the livers in this study were initially rejected for transplantation,” they wrote, “and the confidence intervals of the lactate concentration at the end of reperfusion largely overlap with time-matched values reported by others during [normothermic machine perfusion] of rejected human livers.”
Hepatocellular injury and histology also were evaluated during and after simulated transplantation, respectively, with favorable results. Although sites of preexisting hepatic injury were aggravated by the process and rates of apoptosis increased, the investigators considered these changes clinically insignificant.
Looking to the future, the investigators suggested that further refinement of the process could facilitate even-lower storage temperatures while better preserving liver viability.
“The use of human livers makes this study clinically relevant and promotes the translation of subzero organ preservation to the clinic,” the investigators concluded. “However, long-term survival experiments of transplanted supercooled livers in swine or an alternative large animal model will be needed before clinical translation.”
The study was funded by the National Institutes of Health and the Department of Defense. Dr. de Vries and four other coauthors have provisional patent applications related to the study, and one coauthor disclosed a financial relationship with Organ Solutions.
SOURCE: de Vries RJ et al. Nature Biotechnol. 2019 Sep 9. doi: 10.1038/s41587-019-0223-y.
FROM NATURE BIOTECHNOLOGY
Nonmyeloablative conditioning carries lower infection risk in patients with AML
For patients with acute myeloid leukemia (AML) in need of allogeneic hematopoietic cell transplantation (alloHCT), reduced-intensity/nonmyeloablative conditioning (RIC/NMA) offers a lower risk of infection than myeloablative conditioning (MAC), based on a retrospective study involving more than 1,700 patients.
Within 100 days of treatment, patients who underwent MAC were significantly more likely to develop a bacterial infection, and develop it at an earlier date, than patients who had undergone RIC/NMA, reported lead author Celalettin Ustun, MD, of Rush University in Chicago, and colleagues.
“The incidence of infections, a common and often severe complication of alloHCT, is expected to be lower after RIC/NMA compared with MAC and thus contribute to the decreased [nonrelapse mortality],” the investigators wrote in Blood Advances, noting that this hypothesis has previously lacked supporting data, prompting the present study.
The retrospective analysis involved 1,755 patients with AML who were in first complete remission. Data were drawn from the Center for International Blood and Marrow Transplant Research (CIBMTR). The primary end point was incidence of infection within 100 days after T-cell replete alloHCT in patients receiving MAC (n = 978) versus those who underwent RIC/NMA (n = 777). Secondary end points included comparisons of infection types and infection density.
Patients who received RIC/NMA were generally older and more likely to have myelodysplastic syndrome than patients in the MAC group; the groups were otherwise similar, based on comorbidities, cytogenetic risks, and Karnofsky performance scores.
The proportion of patients who developed at least one infection was comparable between groups: 61% of MAC patients versus 58% of RIC/NMA patients (P = .21). Further analysis, however, showed that MAC was associated with some relatively increased risks. For instance, patients in the MAC group tended to develop infections sooner than patients treated with RIC/NMA (median day 15 vs. day 21), and more patients treated with MAC had at least one bacterial infection by day 100 (46% vs. 37%).
Although the proportion of patients developing at least one viral infection was slightly lower in the MAC group than in the RIC/NMA group (34% vs. 39%), overall infection density, a measure that accounts for multiple infections per patient, was higher with MAC.
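Infection density differs from the proportion of patients with at least one infection in that it counts every episode and normalizes by time at risk. Below is a minimal sketch of that idea, assuming a per-100-patient-days definition; the study’s exact normalization is not spelled out here.

```python
# Infection density: total infection episodes per 100 patient-days at risk.
# Assumes a per-100-patient-days definition; illustrative only.
def infection_density(episodes: int, patient_days_at_risk: int) -> float:
    return episodes / patient_days_at_risk * 100

# Hypothetical cohort: 120 episodes across 90 patients followed 100 days each.
print(f"{infection_density(120, 90 * 100):.2f} per 100 patient-days")  # 1.33
```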
The increased bacterial infections after MAC were caused by gram-positive bacteria, while the increased viral infections with RIC/NMA were caused by cytomegalovirus, the investigators reported.
“RIC/NMA alloHCT is associated with a decreased risk of any infection and particularly early bacterial infections,” the investigators wrote. “The risk of viral and fungal infections per days at risk is similar.”
The Center for International Blood and Marrow Transplant Research is supported by grants from the U.S. government and several pharmaceutical companies. The investigators reported having no conflicts of interest.
SOURCE: Ustun C et al. Blood Adv. 2019 Sep 10;3(17):2525-36.
FROM BLOOD ADVANCES
Key clinical point: For patients with acute myeloid leukemia (AML) in need of allogeneic hematopoietic cell transplantation (alloHCT), reduced-intensity/nonmyeloablative conditioning (RIC/NMA) offers a lower risk of infection than myeloablative conditioning (MAC).
Major finding: By day 100, 37% of patients who received RIC/NMA had at least one bacterial infection, compared with 46% of patients who underwent MAC (P = .0004).
Study details: A retrospective study involving 1,755 patients with AML in first complete remission.
Disclosures: The Center for International Blood and Marrow Transplant Research is supported by grants from the U.S. government and several pharmaceutical companies. The investigators reported having no conflicts of interest.
Source: Ustun C et al. Blood Adv. 2019 Sep 10;3(17):2525-36.
SABR offers surgery alternative for localized RCC
For patients with localized renal cell carcinoma (RCC), stereotactic ablative body radiotherapy (SABR) may be an effective alternative to surgery, according to findings from a retrospective study.
Patients with smaller tumors and nonmetastatic disease achieved the best outcomes with SABR, reported lead author Rodney E. Wegner, MD, of Allegheny Health Network Cancer Institute, Pittsburgh, and colleagues.
“Radiation therapy is often overlooked in [RCC] as historic preclinical data reported RCC as being relatively radioresistant to external beam radiation at conventional doses,” the investigators wrote in Advances in Radiation Oncology. However, SABR may be able to overcome this resistance by delivering “highly conformal dose escalated radiation,” the investigators noted, citing two recent reports from the International Radiosurgery Oncology Consortium for Kidney (IROCK) that showed promising results (J Urol. 2019 Jun;201[6]:1097-104 and Cancer. 2018 Mar 1;124[5]:934-42).
The present study included 347 patients with RCC from the National Cancer Database who were treated with SABR and not surgery. Most patients (94%) did not have systemic therapy. Similar proportions lacked lymph node involvement (97%) or distant metastasis (93%). About three-quarters of patients (76%) had T1 disease. The median SABR dose was 45 Gy, ranging from 35 to 54 Gy, most frequently given in three fractions.
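To see why a 45-Gy course in three fractions counts as dose escalated relative to the same physical dose at conventional fraction sizes, the usual back-of-the-envelope tool is the linear-quadratic biologically effective dose, BED = n x d x (1 + d/(alpha/beta)). The sketch below is textbook radiobiology with an assumed alpha/beta of 10 Gy for tumor, not a calculation from the study:

```python
# Sketch using the standard linear-quadratic model (not from the study).
# BED = n * d * (1 + d / (alpha/beta)); alpha/beta = 10 Gy is a common
# assumption for tumor tissue.

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    """Biologically effective dose in Gy."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

print(f"SABR, 3 x 15 Gy:           BED = {bed(3, 15.0):.1f} Gy")   # 112.5 Gy
print(f"Conventional, 25 x 1.8 Gy: BED = {bed(25, 1.8):.1f} Gy")   # 53.1 Gy
```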
After a median follow-up of 36 months, ranging from 1 to 156 months, median overall survival across all patients was 58 months. SABR was most effective for patients with nonmetastatic disease who had smaller tumors.
An inverse correlation between tumor size and overall survival was apparent: patients with tumors 2.5 cm or smaller had the longest median overall survival, at 92 months, and survival shortened as tumors grew larger, dropping to 88 months for tumors 2.6-3.5 cm, 44 months for tumors 3.6-5.0 cm, and 26 months for tumors larger than 5.0 cm. In addition to tumor size and metastatic disease, older age was a risk factor for shorter survival.
“The results presented demonstrate excellent post-SABR outcomes, with median overall survival in the range of 7-8 years for smaller lesions,” the investigators wrote. “This is particularly impressive considering that many of these patients were likely medically inoperable.”
The researchers noted that most kidney SABR is performed at academic centers, which highlights the importance of appropriate technology and training for delivering this treatment.
“Further prospective research is needed to verify its safety and efficacy,” the investigators concluded.
No external funding was provided for the project and the investigators reported no conflicts of interest.
SOURCE: Wegner RE et al. Adv Rad Onc. 2019 Aug 8. doi: 10.1016/j.adro.2019.07.018.
FROM ADVANCES IN RADIATION ONCOLOGY
Key clinical point: For patients with localized renal cell carcinoma (RCC), stereotactic ablative body radiotherapy (SABR) may be an effective alternative to surgery, particularly for smaller, nonmetastatic tumors.
Major finding: Median overall survival was 92 months among patients with renal tumors no larger than 2.5 cm.
Study details: A retrospective study involving 347 patients with localized renal cell carcinoma (RCC) who were treated with stereotactic ablative body radiotherapy.
Disclosures: No external funding was provided for the study and the investigators reported having no conflicts of interest.
Source: Wegner RE et al. Adv Rad Onc. 2019 Aug 8. doi: 10.1016/j.adro.2019.07.018.
Pelvic floor muscle training outperforms attention-control massage for fecal incontinence
For first-line treatment of patients with fecal incontinence, pelvic floor muscle training (PFMT) is superior to attention-control massage, according to investigators.
In a study involving 98 patients, those who combined PFMT with biofeedback and conservative therapy were five times as likely to report improved symptoms as those who used attention-control massage and conservative therapy, reported Anja Ussing, MD, of Copenhagen University Hospital in Hvidovre, Denmark, and colleagues. Patients in the PFMT group also had significantly greater reductions in severity of incontinence, based on Vaizey incontinence score.
“Evidence from randomized controlled trials regarding the effect of PFMT for fecal incontinence is lacking,” the investigators wrote in Clinical Gastroenterology and Hepatology. Although previous trials have evaluated PFMT, none controlled for the effect of interactions with care providers. “To evaluate the effect of PFMT, there is a need for a trial that uses a comparator to control for this nonspecific trial effect associated with the attention given by the health care professional.”
To perform such a trial, the investigators recruited 98 patients with a history of fecal incontinence for at least 6 months. Patients were excluded if they had severe neurologic conditions, pregnancy, diarrhea, rectal prolapse, previous radiotherapy or cancer surgery in the lower abdomen, cognitive impairment, inadequate fluency in Danish, or a history of at least two PFMT training sessions within the past year. Enrolled patients were randomized in a 1:1 ratio to receive PFMT with biofeedback and conservative treatment, or attention-control massage training and conservative therapy. The primary outcome was symptom improvement, determined by the Patient Global Impression of Improvement scale at 16 weeks. Secondary outcome measures included the Fecal Incontinence Severity Index, Vaizey score, and Fecal Incontinence Quality of Life Scale.
Patients were predominantly female, with just three men in the PFMT group and six in the attention-control massage group. The PFMT group also had a slightly higher median age, at 65 years, compared with 58 years in the control group.
At 16 weeks, the difference in self-reported symptoms was dramatic: 74.5% of patients in the PFMT group reported improvement, compared with 35.5% in the control group, which translated to an unadjusted odds ratio of 5.16 (P = .0002). When symptom improvement was restricted to patients who reported being “very much better” or “much better,” the disparity between groups remained strong, with an unadjusted OR of 2.98 (P = .025). Among the three secondary outcomes, only the Vaizey score showed a significant difference between groups: patients treated with PFMT had a mean difference in Vaizey score change of –1.83 points on a scale from 0 to 24, with 24 representing complete incontinence (P = .04).
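For readers who want to see where a figure like the unadjusted odds ratio comes from, the sketch below rebuilds it from the reported proportions. Because the article gives percentages rather than the underlying integer counts, the pseudo-counts are assumptions, and the result only approximates the published OR of 5.16:

```python
# Sketch: unadjusted odds ratio from a 2x2 table. The true group sizes are
# not reported here, so percentages are scaled to pseudo-counts per 100
# patients; the published OR (5.16) was computed from the actual counts.

def odds_ratio(improved_tx: float, not_improved_tx: float,
               improved_ctrl: float, not_improved_ctrl: float) -> float:
    """(odds of improvement on treatment) / (odds of improvement on control)."""
    return (improved_tx / not_improved_tx) / (improved_ctrl / not_improved_ctrl)

or_est = odds_ratio(74.5, 25.5, 35.5, 64.5)
print(f"Approximate unadjusted OR: {or_est:.2f}")  # ~5.31
```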
“We were not able to show any differences between groups in the number of fecal incontinence episodes,” the investigators wrote. “We had much missing data in the bowel diaries and we can only guess what the result would have been if the data had been more complete. Electronic assessment of incontinence episodes could be a way to reduce the amount of missing data in future trials.”
Still, the investigators concluded that PFMT was the superior therapy. “Based on the results, PFMT in combination with conservative treatment should be offered as first-line treatment for adults with fecal incontinence.”
They also highlighted the broad applicability of their findings, regardless of facility type.
“In the current trial, more than one-third of patients had sphincter injuries confirmed at endoanal ultrasound, this reflects the tertiary setting of our trial,” they wrote. “However, our results may be highly relevant in a primary setting because there is an unmet need for treatment of fecal incontinence in primary health care, and the interventions do not necessarily need to be conducted at specialized centers.”
The study was funded by the Danish Foundation for Research in Physiotherapy, The Lundbeck Foundation, the Research Foundation at Copenhagen University Hospital, and the Foundation of Aase and Ejnar Danielsen. The investigators reported additional relationships with Medtronic, Helsefonden, Gynzone, and others.
SOURCE: Ussing A et al. Clin Gastroenterol Hepatol. 2018 Dec 20. doi: 10.1016/j.cgh.2018.12.015.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Clip closure reduces postop bleeding risk after proximal polyp resection
Closing the resection site with hemoclips after removal of large polyps in the proximal colon reduces the risk of postprocedure bleeding, according to investigators. In a prospective study of almost 1,000 patients, this benefit was not influenced by polyp size, electrocautery setting, or concomitant use of antithrombotic medications, reported Heiko Pohl, MD, of Geisel School of Medicine at Dartmouth, Hanover, N.H., and colleagues.
“Endoscopic resection has replaced surgical resection as the primary treatment for large colon polyps due to a lower morbidity and less need for hospitalization,” the investigators wrote in Gastroenterology. “Postprocedure bleeding is the most common severe complication, occurring in 2%-24% of patients.” This risk is particularly high among patients with large polyps in the proximal colon.
Although previous trials have suggested that closing polyp resection sites with hemoclips could reduce the risk of postoperative bleeding, studies to date have been retrospective or uncontrolled, precluding definitive conclusions.
The prospective, controlled trial involved 44 endoscopists at 18 treatment centers. Enrollment included 919 patients with large, nonpedunculated colorectal polyps of at least 20 mm in diameter. Patients were randomized in an approximate 1:1 ratio into the clip group or control group and followed for at least 30 days after endoscopic polyp resection. The primary outcome was postoperative bleeding, defined as severe bleeding that required invasive intervention such as surgery or blood transfusion during follow-up. Subgroup analysis looked for associations between bleeding and polyp location, size, electrocautery setting, and medications.
Across the entire population, postoperative bleeding was significantly less common among patients who had their resection sites closed with clips, occurring at a rate of 3.5%, compared with 7.1% in the control group (P = .015). Serious adverse events were also less common in the clip group than the control group (4.8% vs. 9.5%; P = .006).
While the reduction in bleeding risk from clip closure was not influenced by polyp size, use of antithrombotic medications, or electrocautery setting, polyp location turned out to be a critical factor. The greatest reduction in risk of postoperative bleeding was seen among the 615 patients who had proximal polyps, based on a bleeding rate of 3.3% with clips versus 9.6% without (P = .001). In contrast, clips in the distal colon were associated with a higher absolute risk of postoperative bleeding than no clips (4.0% vs. 1.4%), although this difference was not statistically significant (P = .178).
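As a back-of-the-envelope way to read these rates (a sketch, not part of the published analysis), the absolute risk reduction and the number needed to treat follow directly from the reported bleeding rates:

```python
# Sketch: absolute risk reduction (ARR) and number needed to treat (NNT)
# from the reported bleeding rates; not part of the published analysis.
import math

def arr_nnt(control_rate: float, treated_rate: float) -> tuple[float, int]:
    """Return (ARR, NNT rounded up) for a beneficial treatment."""
    arr = control_rate - treated_rate
    return arr, math.ceil(1 / arr)

overall = arr_nnt(control_rate=0.071, treated_rate=0.035)   # all patients
proximal = arr_nnt(control_rate=0.096, treated_rate=0.033)  # proximal polyps
print(f"Overall:  ARR {overall[0]:.1%}, NNT {overall[1]}")    # ARR 3.6%, NNT 28
print(f"Proximal: ARR {proximal[0]:.1%}, NNT {proximal[1]}")  # ARR 6.3%, NNT 16
```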
“[T]his multicenter trial provides strong evidence that endoscopic clip closure of the mucosal defect after resection of large ... nonpedunculated colon polyps in the proximal colon significantly reduces the risk of postprocedure bleeding,” the investigators wrote.
They suggested that their study provides greater confidence in findings than similar trials previously conducted, enough to recommend that endoscopic techniques be altered accordingly. “[O]ur trial was methodologically rigorous, adequately powered, and all polyps were removed by endoscopic mucosal resection, which is considered the standard technique for large colon polyps in Western countries,” they wrote. “The results of the study are therefore broadly applicable to current practice. Furthermore, conduct of the study at different centers with multiple endoscopists strengthens generalizability of the findings.”
The investigators also speculated about why postoperative bleeding risk was increased when clips were used in the distal colon. “Potential explanations include a poorer quality of clipping, a shorter clip retention time, possibl[y] related to a thicker colon wall in the distal compared to the proximal colon,” they wrote, adding that “these considerations are worthy of further study.”
Indeed, more work remains to be done. “A formal cost-effectiveness analysis is needed to better understand the value of clip closure,” they wrote. “Such analysis can then also examine possible thresholds, for instance regarding the minimum proportion of polyp resections, for which complete closure should be achieved, or the maximum number of clips to close a defect.”
The study was funded by Boston Scientific. The investigators reported additional relationships with U.S. Endoscopy, Olympus, Medtronic, and others.
SOURCE: Pohl H et al. Gastroenterology. 2019 Mar 15. doi: 10.1053/j.gastro.2019.03.019.
FROM GASTROENTEROLOGY
Patients with viral hepatitis are living longer, increasing risk of extrahepatic mortality
Patients with viral hepatitis may live longer after treatment with direct-acting antiviral agents (DAAs), but their risk of death from extrahepatic causes may rise as a result, according to investigators.
Importantly, this increasing rate of extrahepatic mortality shouldn’t be read as a causal link with DAA use, cautioned lead author Donghee Kim, MD, PhD, of Stanford (Calif.) University, and colleagues. Instead, the upward trend more likely reflects successful treatment with DAAs, which extends lifespan and, with it, the period during which patients are susceptible to extrahepatic conditions.
This was just one finding from a retrospective study that used U.S. Census and National Center for Health Statistics mortality records to evaluate almost 28 million deaths that occurred between 2007 and 2017. The investigators looked for mortality trends among patients with common chronic liver diseases, including viral hepatitis, alcoholic liver disease (ALD), and nonalcoholic fatty liver disease (NAFLD), noting that each of these conditions is associated with extrahepatic complications. The study included deaths due to extrahepatic cancer, cardiovascular disease, and diabetes.
While the efficacy of therapy for viral hepatitis has improved markedly since 2014, treatments for ALD and NAFLD have remained static, the investigators noted.
“Unfortunately, there have been no significant breakthroughs in the treatment of [ALD] over the last 2 decades, resulting in an increase in estimated global mortality to 3.8%,” the investigators wrote in Gastroenterology.
“[NAFLD] is the most common chronic liver disease in the world,” they added. “The leading cause of death in individuals with NAFLD is cardiovascular disease, followed by extrahepatic malignancies, and then liver-related mortality. However, recent trends in ALD and NAFLD-related extrahepatic complications in comparison to viral hepatitis have not been studied.”
The results of the current study supported the positive impact of DAAs, which began to see widespread use in 2014. Age-standardized mortality among patients with hepatitis C virus rose until 2014 (2.2% per year) and dropped thereafter (–6.5% per year). Mortality among those with hepatitis B virus steadily decreased over the study period (–1.2% per year).
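Figures such as “–6.5% per year” are average annual percent changes, conventionally estimated by regressing the logarithm of the age-standardized rate on calendar year. The sketch below illustrates the method on invented rates; it is not the investigators’ code or data:

```python
# Sketch: annual percent change (APC) from a log-linear trend. Rates are
# invented for illustration; APC = 100 * (exp(slope) - 1).
import numpy as np

years = np.array([2014, 2015, 2016, 2017])
rates = np.array([5.0, 4.7, 4.4, 4.1])  # hypothetical deaths per 100,000

slope, _intercept = np.polyfit(years, np.log(rates), 1)
apc = 100 * (np.exp(slope) - 1)
print(f"Estimated APC: {apc:.1f}% per year")  # about -6.4% per year
```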
Of note, while deaths from HCV-related liver disease dropped from 2014 to 2017, extrahepatic causes of death didn’t follow suit: age-standardized mortality for cardiovascular disease and diabetes increased at average annual rates of 1.9% and 3.3%, respectively, while the rate of extrahepatic cancer-related deaths held steady.
“The widespread use, higher efficacy and durable response to DAA agents in individuals with HCV infection may have resulted in a paradigm shift in the clinical progression of coexisting disease entities following response to DAA agents in the virus-free environment,” the investigators wrote. “These findings suggest assessment and identification of risk and risk factors for extrahepatic cancer, cardiovascular disease, and diabetes in individuals who have been successfully treated and cured of HCV infection.”
In sharp contrast with the viral hepatitis findings, mortality rates among patients with ALD and NAFLD increased at an accelerating rate over the 11-year study period.
Among patients with ALD, all-cause mortality increased by an average of 3.4% per year, rising faster in the second half of the study than in the first (4.6% vs 2.1%). Liver disease–related mortality rose at a similar, accelerating rate. In the same group, deaths due to cardiovascular disease increased at an average annual rate of 2.1%, a pace that also accelerated over the study period, while extrahepatic cancer-related deaths increased at a steadier rate of 3.6% per year.
For patients with NAFLD, all-cause mortality increased by 8.1% per year, accelerating from 6.1% in the first half of the study to 11.2% in the second. Deaths from liver disease increased at an average rate of 12.6% per year, while extrahepatic deaths increased significantly for all three included types: cardiovascular disease (2.0%), extrahepatic cancer (15.1%), and diabetes (9.7%).
Concerning the worsening rates of mortality among patients with ALD and NAFLD, the investigators cited a lack of progress in treatments, and suggested that “the quest for newer therapies must remain the cornerstone in our efforts.”
The investigators reported no external funding or conflicts of interest.
SOURCE: Kim D et al. Gastroenterology. 2019 Jun 25. doi: 10.1053/j.gastro.2019.06.026.
Chronic liver disease is one of the leading causes of death in the United States. Whereas mortality from other causes (e.g., heart disease and cancer) has declined, age-adjusted mortality from chronic liver disease has continued to increase. There have been a few major advances in the treatment of several chronic liver diseases in recent years. These include nucleos(t)ide analogues for hepatitis B virus (HBV) and direct-acting antiviral agents for the treatment of hepatitis C virus infection (HCV). Many studies show that these treatments are highly effective in improving patient outcomes, including patient survival. However, whether these individual-level benefits have translated into population-level improvements remains unclear.
Overall, the results were mixed; they were encouraging for viral hepatitis but concerning for alcoholic and nonalcoholic liver disease. Specifically, all-cause mortality from HCV was on an upward trajectory in the first 7 years (from 2007 to 2014) but the trend shifted from 2014 onward. Importantly, this inflection point coincided with the timing of the new HCV treatments. Most of this positive shift post 2014 was related to a strong downward trend in liver-related mortality. In contrast, upward trends in mortality related to extrahepatic causes (such as cardiovascular mortality) continued unabated. The authors found similar results for HBV. The story, however, was different for alcohol and nonalcohol-related liver disease – both conditions lacking effective treatments; liver-related mortality for both continued to increase during the study period.
Although we cannot make causal inferences from this study, overall, the results are good news. They suggest that HBV and HCV treatments have reached enough infected people to result in tangible improvements in the burden of chronic liver disease. We may now need to shift the focus of secondary prevention efforts from liver to nonliver (extrahepatic) morbidity in the newer cohorts of patients with treated HCV and HBV.
Fasiha Kanwal, MD, MSHS, is an investigator in the clinical epidemiology and comparative effectiveness program for the Center for Innovations in Quality, Effectiveness, and Safety in collaboration with the Michael E. DeBakey VA Medical Center, as well as an associate professor of medicine in gastroenterology and hepatology at Baylor College of Medicine in Houston. She has no conflicts of interest.
FROM GASTROENTEROLOGY
Type of renal dysfunction affects liver cirrhosis mortality risk
For non–status 1 patients with cirrhosis who are awaiting liver transplantation, type of renal dysfunction may be a key determinant of mortality risk, based on a retrospective analysis of more than 22,000 patients.
Risk of death was greatest for patients with acute on chronic kidney disease (AKI on CKD), followed by AKI alone, then CKD alone, reported lead author Giuseppe Cullaro, MD, of the University of California, San Francisco, and colleagues.
Although it is well known that renal dysfunction worsens outcomes among patients with liver cirrhosis, the impact of different types of kidney pathology on mortality risk has been minimally researched, the investigators wrote in Clinical Gastroenterology and Hepatology. “To date, studies evaluating the impact of renal dysfunction on prognosis in patients with cirrhosis have mostly focused on AKI.”
To learn more, the investigators performed a retrospective study involving acute, chronic, and acute on chronic kidney disease among patients with cirrhosis. They included data from 22,680 non–status 1 adults who were awaiting liver transplantation between 2007 and 2014, with at least 90 days on the wait list. Information was gathered from the Organ Procurement and Transplantation Network registry.
AKI was defined by fewer than 72 days of hemodialysis, or by an increase in creatinine of at least 0.3 mg/dL or at least 50% in the last 7 days. CKD was identified by more than 72 days of hemodialysis, or by an estimated glomerular filtration rate (eGFR) less than 60 mL/min per 1.73 m² for 90 days with a final eGFR of at least 30 mL/min per 1.73 m². Using these criteria, the researchers assigned patients to four categories: AKI on CKD, AKI alone, CKD alone, or normal renal function. The primary outcome was wait list mortality, defined as death or removal from the wait list because of illness. Follow-up began at the time of addition to the wait list and continued until transplant, removal from the wait list, or death.
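To make the partition concrete, the sketch below encodes these criteria as a simple classification function. The function name and inputs are illustrative assumptions mirroring the published definitions, not the authors' actual implementation:

```python
# Illustrative sketch of the study's four renal-function categories.
# Inputs mirror the published definitions; this is not the authors' code.

def classify_renal_function(dialysis_days: float,
                            creatinine_rise_mg_dl: float,
                            creatinine_rise_pct: float,
                            egfr_low_90_days: bool,
                            final_egfr: float) -> str:
    """Return one of: 'AKI on CKD', 'AKI', 'CKD', 'normal'."""
    # AKI: <72 days of hemodialysis, or creatinine up >=0.3 mg/dL
    # or >=50% in the last 7 days.
    aki = (0 < dialysis_days < 72 or
           creatinine_rise_mg_dl >= 0.3 or
           creatinine_rise_pct >= 50)
    # CKD: >72 days of hemodialysis, or eGFR <60 mL/min/1.73 m²
    # for 90 days with a final eGFR of at least 30.
    ckd = (dialysis_days > 72 or
           (egfr_low_90_days and final_egfr >= 30))
    if aki and ckd:
        return "AKI on CKD"
    if aki:
        return "AKI"
    if ckd:
        return "CKD"
    return "normal"

# Example: a recent creatinine rise on top of longstanding low eGFR
print(classify_renal_function(0, 0.4, 20, True, 35))  # -> 'AKI on CKD'
```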
Multivariate analysis, which accounted for final MELD-Na score and other confounders, showed that patients with AKI on CKD fared worst, with a mortality risk 2.86-fold that of patients with normal renal function (subhazard ratio [SHR], 2.86). Patients with AKI alone followed closely (SHR, 2.42), and those with CKD alone more distantly (SHR, 1.56). Further analysis showed that the disparities in mortality risk between subgroups became more pronounced as MELD-Na score increased. In addition, evaluation of receiver operating characteristic curves for 6-month wait list mortality showed that adding pattern of renal function to MELD-Na score improved prognostic accuracy, increasing the area under the curve from 0.71 to 0.80 (P less than .001).
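The reported gain in discrimination corresponds to comparing areas under the receiver operating characteristic curve for risk scores with and without the renal-function pattern. A minimal sketch with synthetic data, with effect sizes chosen only so the resulting AUCs land near the reported 0.71 and 0.80:

```python
# Minimal sketch: comparing 6-month wait list mortality discrimination
# (area under the ROC curve) for a baseline score vs. one that also
# encodes the pattern of renal dysfunction. Data are synthetic; the
# study's actual scores and outcomes are not reproduced here.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
died_6mo = rng.integers(0, 2, size=5000)               # 1 = death within 6 months
meldna_only  = died_6mo * 0.8 + rng.normal(size=5000)  # baseline risk score
meldna_renal = died_6mo * 1.2 + rng.normal(size=5000)  # + renal-function pattern

print("MELD-Na alone:        AUC =", round(roc_auc_score(died_6mo, meldna_only), 2))
print("MELD-Na + renal type: AUC =", round(roc_auc_score(died_6mo, meldna_renal), 2))
```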
“This suggests that incorporating the pattern of renal function could provide an opportunity to better prognosticate risk of mortality in the patients with cirrhosis who are the sickest,” the investigators concluded.
They also speculated about why outcomes may vary by type of kidney dysfunction.
“We suspect that those patients who experience AKI and AKI on CKD in our cohort likely had a triggering event – infection, bleeding, hypovolemia – that put these patients at greater risk for waitlist mortality,” the investigators wrote. “These events inherently carry more risk than stable nonliver-related elevations in serum creatinine that are seen in patients with CKD. Because of this heterogeneity of etiology in renal dysfunction in patients with cirrhosis, it is perhaps not surprising that unique renal function patterns variably impact mortality.”
The investigators noted that the findings from the study have “important implications for clinical practice,” and suggested that incorporating type of renal dysfunction would have the greatest effect on prognostic accuracy among patients at highest risk of mortality.
The study was funded by a Paul B. Beeson Career Development Award and the National Institute of Diabetes and Digestive and Kidney Diseases. Dr. Verna disclosed relationships with Salix, Merck, and Gilead.
SOURCE: Cullaro G et al. Clin Gastroenterol Hepatol. 2019 Feb 1. doi: 10.1016/j.cgh.2019.01.043.
Cirrhotic patients with renal failure have a sevenfold increase in mortality compared with those without renal failure. Acute kidney injury (AKI) is common in cirrhosis; increasingly, cirrhotic patients awaiting liver transplantation also have, or are at risk for, chronic kidney disease (CKD). They are sicker, older, and have more comorbidities such as obesity and diabetes. In this study, the cumulative incidence of death on the wait list was much more pronounced for any form of AKI, with those with AKI on CKD having the highest cumulative incidence of wait list mortality compared with those with normal renal function. The study raises several important issues. First, AKI exerts a greater influence on mortality risk when superimposed on CKD than it does in patients with normal renal function. This is relevant given the increasing prevalence of CKD in this population. Second, the study emphasizes the need to measure renal function accurately: all serum creatinine–based equations overestimate glomerular filtration rate in the presence of renal dysfunction. Finally, it highlights the importance of extrahepatic factors in determining mortality on the wait list. While a mathematical model such as the MELD-Na score may predict mortality in all comers, for a specific patient the presence of comorbid conditions, malnutrition and sarcopenia, infections, critical illness, and, now, pattern of renal dysfunction may all play a role.
Sumeet K. Asrani, MD, MSc, is a hepatologist affiliated with Baylor University Medical Center, Dallas. He has no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY