HCC surveillance after anti-HCV therapy cost effective only for patients with cirrhosis
For patients with hepatitis C virus (HCV)–related cirrhosis (F4), but not those with advanced fibrosis (F3), hepatocellular carcinoma (HCC) surveillance after a sustained virologic response (SVR) is cost effective, according to investigators.
Current international guidelines call for HCC surveillance among all patients with advanced fibrosis (F3) or cirrhosis (F4) who have achieved SVR, but this is “very unlikely to be cost effective,” reported lead author Hooman Farhang Zangneh, MD, of Toronto General Hospital and colleagues. “HCV-related HCC rarely occurs in patients without cirrhosis,” the investigators explained in Clinical Gastroenterology and Hepatology. “With cirrhosis present, HCC incidence is 1.4% to 4.9% per year. If found early, options for curative therapy include radiofrequency ablation (RFA), surgical resection, and liver transplantation.”
The investigators developed a Markov model to determine which at-risk patients could undergo surveillance while remaining below willingness-to-pay thresholds. Specifically, cost-effectiveness was assessed for annual or biannual (twice-yearly) ultrasound screening among patients with advanced fibrosis (F3) or compensated cirrhosis (F4) who were aged 50 years and had achieved an SVR. Relevant data were drawn from expert opinion, the medical literature, and Canada Life Tables. Various HCC incidence rates were tested, including a constant annual rate, rates based on type of antiviral treatment (direct-acting or interferon-based therapy), rates based on stage of fibrosis, and a rate that increased with age. The model was validated by applying it to patients with F3 or F4 fibrosis who had not yet achieved an SVR. All monetary values were reported in 2015 Canadian dollars.
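For readers unfamiliar with this type of analysis, the sketch below illustrates the general mechanics of a Markov cohort model: a cohort is advanced through a set of health states in yearly cycles while discounted costs and quality-adjusted life-years accumulate. The states, transition probabilities, costs, and utilities shown are simplified, hypothetical placeholders, not the parameters used by Zangneh and colleagues.

```
# Minimal Markov cohort model sketch (illustrative only).
# States, transition probabilities, costs, and utilities are hypothetical
# placeholders, not the study's actual inputs.
import numpy as np

states = ["SVR, no HCC", "HCC", "dead"]

# Annual transition probabilities (each row sums to 1).
P = np.array([
    [0.985, 0.005, 0.010],  # from "SVR, no HCC": 0.5% annual HCC incidence
    [0.000, 0.800, 0.200],  # from "HCC"
    [0.000, 0.000, 1.000],  # death is absorbing
])

annual_cost = np.array([200.0, 30000.0, 0.0])  # dollars per state per year (hypothetical)
utility = np.array([0.85, 0.60, 0.0])          # QALY weight per state per year (hypothetical)
discount = 0.05                                # annual discount rate

cohort = np.array([1.0, 0.0, 0.0])  # everyone starts post-SVR without HCC
total_cost = total_qaly = 0.0

for year in range(30):  # thirty 1-year cycles
    d = 1.0 / (1.0 + discount) ** year
    total_cost += d * (cohort @ annual_cost)
    total_qaly += d * (cohort @ utility)
    cohort = cohort @ P  # advance the cohort one cycle

print(f"Discounted cost per patient:  ${total_cost:,.0f}")
print(f"Discounted QALYs per patient: {total_qaly:.2f}")
```

Running such a model twice, once with a surveillance strategy and once without, yields the paired cost and QALY estimates from which an incremental cost-effectiveness ratio is calculated.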
To reflect current guidelines, the investigators first tested surveillance of all patients with F3 or F4 fibrosis, assuming a constant annual HCC incidence of 0.5%. Biannual ultrasound surveillance after SVR caught more cases of HCC at a curable stage (78%) than no surveillance (29%); however, false-positives were relatively common, occurring in 21.8% and 15.7% of patients with biannual and annual surveillance, respectively. The investigators noted that in the real world, some false-positive findings are not resolved by more advanced imaging, so patients go on to receive unnecessary RFA, which incurs additional costs. Partly for this reason, although biannual surveillance was more effective, it was also more expensive, with an incremental cost-effectiveness ratio (ICER) of $106,792 per quality-adjusted life-year (QALY), compared with $72,105 per QALY for annual surveillance.
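The ICER itself is simple arithmetic: the difference in expected cost between two strategies divided by the difference in expected QALYs they produce. The figures in the sketch below are invented for illustration and are not results from the study.

```
# Hypothetical ICER calculation; cost and QALY figures are invented.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g., biannual surveillance vs. no surveillance (made-up numbers)
ratio = icer(cost_new=12_000.0, qaly_new=12.15, cost_old=4_000.0, qaly_old=12.00)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # -> $53,333 per QALY gained
```

A strategy is generally considered cost effective when its ICER falls below the chosen willingness-to-pay threshold.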
When the analysis included only patients with F3 fibrosis after interferon-based therapy, using an HCC incidence of 0.23% per year, biannual and annual ICERs rose to $484,160 and $204,708 per QALY, respectively, both of which exceed standard willingness-to-pay thresholds. In comparison, biannual and annual ICERs were at most $55,850 and $42,305 per QALY, respectively, among patients with cirrhosis before interferon-induced SVR, using an HCC incidence rate of up to 1.39% per year.
“These results suggest that biannual (or annual) HCC surveillance is likely to be cost effective for patients with cirrhosis, but not for patients with F3 fibrosis before SVR,” the investigators wrote.
ICERs for HCC surveillance among patients with cirrhosis after SVR induced by direct-acting antivirals were lower still, at $43,229 and $34,307 per QALY, compared with $188,157 and $111,667 per QALY for patients with F3 fibrosis.
Because surveillance appeared cost effective only in patients with cirrhosis, the investigators tested two risk-stratification thresholds within this population with the aim of improving cost-effectiveness further. They found that surveillance of patients with a pretreatment aspartate aminotransferase to platelet ratio index (APRI) greater than 2.0 (HCC incidence, 0.89% per year) was associated with biannual and annual ICERs of $48,729 and $37,806 per QALY, respectively, whereas when APRI was less than 2.0 (HCC incidence, 0.093% per year), surveillance was less effective and more expensive than no surveillance at all. A similar pattern was found for an FIB-4 threshold of 3.25.
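Both indices are calculated from routine laboratory values. The formulas below are the standard published definitions of APRI and FIB-4; the example inputs are hypothetical and are shown only to illustrate how a patient falls above or below the cutoffs examined in the study.

```
# Standard APRI and FIB-4 formulas; example laboratory values are hypothetical.
import math

def apri(ast, ast_uln, platelets):
    """AST-to-platelet ratio index. ast and ast_uln in U/L; platelets in 10^9/L."""
    return (ast / ast_uln) * 100.0 / platelets

def fib4(age, ast, alt, platelets):
    """Fibrosis-4 index. age in years; AST and ALT in U/L; platelets in 10^9/L."""
    return (age * ast) / (platelets * math.sqrt(alt))

print(round(apri(ast=90, ast_uln=40, platelets=110), 2))      # 2.05 -> above the 2.0 cutoff
print(round(fib4(age=50, ast=90, alt=80, platelets=110), 2))  # 4.57 -> above the 3.25 cutoff
```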
Using an age-stratified HCC risk also lowered the ICERs for screening patients with cirrhosis. With this strategy, the ICER was $48,432 per QALY for biannual surveillance and $37,201 per QALY for annual surveillance.
“These data suggest that, if we assume HCC incidence increases with age, biannual or annual surveillance will be cost effective for the vast majority, if not all, patients with cirrhosis before SVR,” the investigators wrote.
“Our analysis suggests that HCC surveillance is very unlikely to be cost effective in patients with F3 fibrosis, whereas both annual and biannual modalities are likely to be cost effective at standard willingness-to-pay thresholds for patients with cirrhosis compared with no surveillance,” the investigators wrote.
“Additional long-term follow-up data are required to help identify patients at highest risk of HCC after SVR to tailor surveillance guidelines,” the investigators concluded.
The study was funded by the Toronto Centre for Liver Disease. The investigators declared no conflicts of interest.
This story was updated on 7/12/2019.
SOURCE: Zangneh HF et al. Clin Gastroenterol Hepatol. 2018 Dec 20. doi: 10.1016/j.cgh.2018.12.018.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Genomic study reveals five subtypes of colorectal cancer
Colorectal cancer can be divided into five DNA methylation subtypes that predict molecular and clinical behavior and may offer future therapeutic targets, according to investigators.
In 216 unselected colorectal cancers, five subtypes of the CpG island methylator phenotype (CIMP) showed “striking” associations with sex, age, and tumor location, reported lead author Lochlan Fennell, MD, of the QIMR Berghofer Medical Research Institute in Queensland, Australia, and colleagues. CIMP level increased with age in a stepwise fashion, they noted.
Further associations between CIMP subtype and BRAF mutation status support the investigators’ recent report that, although sessile serrated adenomas are not rare in young patients, they appear to pose little risk of malignancy at a young age. With additional research, these findings could “inform the development of patient-centric surveillance for young and older patients who present with sessile serrated adenomas,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology.
“CIMP can be detected using a standardized marker panel to stratify tumors as CIMP-high, CIMP-low, or CIMP-negative,” the investigators noted. In the present study, they expanded these three existing subtypes into five, allowing for better prediction of the clinical and molecular characteristics associated with disease progression.
Initial genomic testing showed that 13.4% of cases carried a BRAF V600E mutation, 34.7% were mutated at KRAS codon 12 or 13, and 42.2% had a TP53 mutation. When sorted into the three previously described subtypes, CIMP-negative cancers were most common (68.5%), followed by CIMP-low (20.4%) and CIMP-high (11.1%). About two-thirds (66%) of BRAF-mutant cancers were CIMP-high, compared with just 3% of BRAF wild-type cases (P less than .0001). KRAS-mutated cases were more often CIMP-low than were KRAS wild-type cancers (34.6% vs. 12.8%; P less than .001).
With use of Illumina HumanMethylation450 BeadChip arrays and recursively partitioned mixture model clustering, five methylation clusters were identified: CIMP-H1 and CIMP-H2 (high methylation levels), CIMP-L1 and CIMP-L2 (intermediate methylation levels), and CIMP-negative (low methylation level). As noted above, methylation level demonstrated a direct relationship with age, with mean age ranging from 61.9 years for CIMP-negative cancers to 75.2 years for CIMP-H1 cancers. The investigators also reported unique characteristics of each new subtype. For instance, the CIMP-H1 cluster had many features in common with serrated neoplasia, such as a high rate of BRAF mutation (73.9%; P less than .0001).
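Recursively partitioned mixture model clustering is typically performed in R on the array-derived methylation beta values. As a loose illustration of the overall task, rather than the authors' actual method, the sketch below groups a hypothetical tumor-by-probe beta-value matrix into five clusters with ordinary k-means; the data are random placeholders.

```
# Generic illustration of unsupervised clustering of methylation beta values.
# This is NOT the authors' RPMM approach; the matrix is a random placeholder
# standing in for 216 tumors x 5,000 CpG probes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
beta = rng.uniform(0.0, 1.0, size=(216, 5000))  # rows = tumors, columns = probes

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(beta)
labels = kmeans.labels_      # one cluster label per tumor
print(np.bincount(labels))   # number of tumors assigned to each of the 5 clusters
```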
“BRAF mutations are a hallmark of the serrated neoplasia pathway, and indicate that these cancers probably arose in serrated precursor lesions,” the investigators wrote. “We previously showed that the colonoscopic incidence of sessile serrated adenomas does not differ between patients aged in their 30s and patients who are much older, whereas BRAF mutant cancers were restricted to older individuals, suggesting these BRAF mutant polyps may have limited malignant potential in young patients.”
In contrast with the CIMP-H1 cases, CIMP-H2 cancers were more often KRAS mutant (54.5% vs. 17.4%). Other findings revealed associations between subtype and tumor location; for example, CIMP-L1 cases were distributed equally between the distal and proximal colon, whereas CIMP-L2 cases more often localized to the distal colon and rectum. Of note, most CIMP-negative cancers (62.3%) occurred in the distal colon, and none had a BRAF mutation.
The five methylation subtypes also showed associations with consensus molecular subtypes (CMS) to varying degrees. The two strongest correlations were found in CIMP-H1 cancers and CIMP-H2 cancers, which were most frequently classified as CMS1 (69.6%) and CMS3 (54.5%), respectively.
Using CIBERSORT, the investigators detected a variety of associations between the five subtypes and stromal immune cell composition. For example, CIMP-H1 cases were enriched for macrophages, compared with the other subtypes, except CIMP-L2. Mast cells showed a stepwise relationship with subtype; they contributed the most to the immune microenvironment of CIMP-negative cancers and the least to cases classified as CIMP-H1. A converse trend was found with natural killer cells.
Of note, in CIMP-H1 and CIMP-H2 cancers, oncogenes were significantly more likely than tumor-suppressor genes to undergo gene body methylation, which is positively correlated with gene expression, and oncogenes in these subtypes had significantly greater gene body methylation than normal colonic mucosa.
“The five subtypes identified in this study are highly correlated with key clinical and molecular features, including patient age, tumor location, microsatellite instability, and oncogenic mitogen-activated protein kinase mutations,” they wrote. “We show that cancers with high DNA methylation show an increased preponderance for mutating genes involved in epigenetic regulation, and namely those that are implicated in the chromatin remodeling process.”
Concluding, the investigators explained the role of their research in future therapy development. “Our analyses have identified potentially druggable vulnerabilities in cancers of different methylation subtypes,” they wrote. “Inhibitors targeting synthetic lethalities, such as SWI/SNF component inhibitors for those with ARID mutations, should be evaluated because these agents may be clinically beneficial to certain patient subsets.”
The study was funded by the National Health and Medical Research Council, the US National Institutes of Health, Pathology Queensland, and others. The investigators disclosed no conflicts of interest.
SOURCE: Fennell L et al. Cell Mol Gastroenterol Hepatol. 2019 Apr 4. doi: 10.1016/j.jcmgh.2019.04.002.
Genomic, epigenomic, and transcriptomic information has revealed molecular subclasses of CRC, which has refined our understanding of the molecular and cellular biology of CRC and improved our treatment of patients with CRC. Several reliable and clinically useful molecular subtypes of colorectal cancer have been identified, including microsatellite unstable (MSI), chromosomal unstable (CIN), CpG island methylator phenotype (CIMP), and CMS 1-4 subtypes. Despite these substantial advances, it is also clear that we still only partially grasp the molecular and cellular biology driving CRC.
The studies by Fennell et al. provide new insights into the CIMP subtype of CRC that address this knowledge gap. Using a large CRC cohort and more detailed molecular information than available in prior studies, they have identified previously unrecognized CRC CIMP subtypes that have unique methylomes and mutation patterns. These five CIMP subclasses vary with regard to location in the colon, frequency of KRAS and BRAF mutations, MSI, and alterations in epigenetic regulatory genes. The observations related to differences in frequencies of MSI and of KRAS and BRAF mutations help demystify the heterogeneity in clinical and cellular behavior that has been seen in the broader class of CIMP cancers. Perhaps most importantly, their studies identify plausible driver molecular alterations unique to the CIMP subclasses, such as subclass-specific mutations in epigenetic regulatory genes and activated oncogenes. These are promising novel targets for chemoprevention strategies and therapies. Fennell and colleagues have now set the stage for functional studies of these molecular alterations to determine their true role in the cellular and clinical behavior of CRC.
William M. Grady, MD, is the Rodger C. Haggitt Professor of Medicine, department of medicine, division of gastroenterology, University of Washington School of Medicine, and clinical research division, Fred Hutchinson Cancer Research Center, Seattle. He is an advisory board member for Freenome and SEngine; has consulted for DiaCarta, Boehringer Ingelheim, and Guardant Health; and has conducted industry-sponsored research for Janssen and Cambridge Epigenetix.
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Underwater endoscopic mucosal resection may be an option for colorectal lesions
Underwater endoscopic mucosal resection (UEMR) achieves higher rates of complete resection of intermediate-size colorectal lesions than conventional endoscopic mucosal resection (CEMR), without increasing procedure time or risk of adverse events, based on a recent head-to-head trial conducted in Japan.
UEMR was associated with higher R0 and en bloc resection rates than was CEMR, reported lead author Takeshi Yamashina, MD, of Osaka (Japan) International Cancer Institute, and colleagues. The study was the first multicenter, randomized trial to demonstrate the superiority of UEMR over CEMR, they noted.
Although CEMR is a well-established method of removing sessile colorectal lesions, those larger than 10 mm can be difficult to resect en bloc, which contributes to a local recurrence rate exceeding 15% when alternative, piecemeal resection is performed, the investigators explained in Gastroenterology.
Recently, UEMR has emerged as “an alternative to CEMR and is reported to be effective for removing flat or large colorectal polyps,” the investigators wrote. “With UEMR, the bowel lumen is filled with water instead of air/CO2, and the lesion is captured and resected with a snare without submucosal injection of normal saline.”
To find out if UEMR offers better results than CEMR, the investigators recruited 211 patients with 214 colorectal lesions at five centers in Japan. Patients were aged at least 20 years and had mucosal lesions of 10-20 mm in diameter. Based on macroscopic appearance, pit pattern classification with magnifying chromoendoscopy, or narrow-band imaging, lesions were classified as adenoma, sessile serrated adenoma/polyp, or intramucosal adenocarcinoma. Patients were randomly assigned in a 1:1 ratio to the UEMR or CEMR group, and just prior to the procedure, operators were informed of the allocated treatment. Ten expert operators were involved, each with at least 10 years of experience, in addition to 18 nonexpert operators with less than 10 years of experience. The primary endpoint was the difference in R0 resection rate between the two groups, with R0 defined as en bloc resection with histologically negative margins. Secondary endpoints were en bloc resection rate, adverse events, and procedure time.
The results showed a clear win for UEMR, with an R0 rate of 69%, compared with 50% for CEMR (P = .011), and an en bloc resection rate that followed the same trend (89% vs. 75%; P = .007). Neither median procedure time nor the number of adverse events differed significantly between the groups.
Subset analysis showed that UEMR was best suited for lesions at least 15 mm in diameter, although the investigators pointed out that the superior R0 resection rate with UEMR held steady regardless of lesion morphology, size, location, or operator experience level.
The investigators suggested that the findings give reason to amend some existing recommendations. “Although the European Society of Gastrointestinal Endoscopy Clinical Guidelines suggest hot-snare polypectomy with submucosal injection for removing sessile polyps 10-19 mm in size, we found that UEMR was more effective than CEMR, in terms of better R0 and en bloc resection rates,” they wrote. “Hence, we think that UEMR will become an alternative to CEMR. It could fill the gap for removing polyps 9 mm [or larger] (indication for removal by cold-snare polypectomy) and [smaller than] 20 mm (indication for ESD removal).”
During the discussion, the investigators explained that UEMR achieves better outcomes primarily by improving access to lesions. Water immersion causes lesions to float upright into the lumen, while keeping the muscularis propria circular behind the submucosa, which allows for easier snaring and decreases risk of perforation. Furthermore, the investigators noted, water immersion limits flexure angulation, luminal distension, and loop formation, all of which improve maneuverability and visibility.
Still, UEMR may take some operator adjustment, the investigators added, going on to provide some pointers. “In practice, we think it is important to fill the entire lumen only with fluid, so we always deflate the lumen completely and then fill it with fluid,” they wrote. “[When the lumen is filled], it is not necessary to change the patient’s position during the UEMR procedure.”
“Also, in cases with unclear endoscopic vision, endoscopists are familiar with air insufflation but, during UEMR, it is better to infuse the fluid to expand the lumen and maintain a good endoscopic view. Therefore, for the beginner, we recommend that the air insufflation button of the endoscopy machine be switched off.”
Additional tips included using saline instead of distilled water, and employing thin, soft snares.
The investigators reported no external funding or conflicts of interest.
SOURCE: Yamashina T et al. Gastroenterology. 2019 Apr 11. doi: 10.1053/j.gastro.2019.04.005.
FROM GASTROENTEROLOGY
Formal weight loss programs improve NAFLD
For patients with nonalcoholic fatty liver disease (NAFLD), formal weight loss programs lead to statistically and clinically significant improvements in biomarkers of liver disease, based on a recent meta-analysis.
The findings support changing NAFLD guidelines to recommend weight loss interventions, according to lead author Dimitrios A. Koutoukidis, PhD, of the University of Oxford, UK, and colleagues.
“Clinical guidelines around the world recommend physicians offer advice on lifestyle modification, which mostly includes weight loss through hypoenergetic diets and increased physical activity,” the investigators wrote in JAMA Internal Medicine. “However, whether clinicians provide advice and the type of advice they give vary greatly, and guidelines rarely specifically recommend treatment programs to support weight loss,” they added.
To investigate associations between methods of weight loss and improvements in NAFLD, the investigators screened for studies involving behavioral weight loss programs, pharmacotherapy, or bariatric surgery, alone or in combination. To limit confounding, studies that combined weight loss with other potential treatments, such as medications, were excluded. Liver disease outcomes after weight loss interventions were compared with those after lower-intensity interventions or minimal or no weight loss support, based on at least one reported biomarker of liver disease.
The literature search returned 22 eligible studies involving 2,588 patients. The investigators found that more intensive weight loss programs were associated with greater weight loss than lower-intensity methods (-3.61 kg; I² = 95%). Multiple biomarkers of liver disease showed significant improvements in association with formal weight loss programs, including histologically or radiologically measured liver steatosis (standardized mean difference, -1.48; I² = 94%), histologic NAFLD activity score (-0.92; I² = 95%), presence of nonalcoholic steatohepatitis (odds ratio, 0.14; I² = 0%), alanine aminotransferase (-9.81 U/L; I² = 97%), aspartate aminotransferase (-4.84 U/L; I² = 96%), alkaline phosphatase (-5.53 U/L; I² = 96%), and gamma-glutamyl transferase (-4.35 U/L; I² = 92%). Weight loss interventions were not significantly associated with changes in histologic liver fibrosis or inflammation, the investigators noted.
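For context, pooled estimates such as these come from random-effects meta-analysis, in which each study's effect is weighted by the inverse of its variance plus an estimate of between-study variance, and I² describes the proportion of variability attributable to heterogeneity rather than chance. The sketch below shows the standard DerSimonian-Laird calculation applied to invented study-level values, not the data analyzed by Koutoukidis and colleagues.

```
# DerSimonian-Laird random-effects pooling with I² (illustrative only).
# Effect sizes (yi) and variances (vi) below are invented placeholders.
import numpy as np

yi = np.array([-1.2, -1.8, -0.9, -2.0])  # per-study standardized mean differences
vi = np.array([0.10, 0.08, 0.15, 0.12])  # per-study variances

w = 1.0 / vi
fixed = np.sum(w * yi) / np.sum(w)          # fixed-effect pooled estimate
Q = np.sum(w * (yi - fixed) ** 2)           # Cochran's Q
df = len(yi) - 1
i_squared = max(0.0, (Q - df) / Q) * 100.0  # heterogeneity statistic

tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
w_re = 1.0 / (vi + tau2)                    # random-effects weights
pooled = np.sum(w_re * yi) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

print(f"Pooled SMD: {pooled:.2f} (SE {se:.2f}); I² = {i_squared:.0f}%")
```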
“The advantages [of weight loss interventions] seem to be greater in people who are overweight and with NAFLD, but our exploratory results suggest that weight loss interventions might still be beneficial in the minority of people with healthy weight and NAFLD,” the investigators wrote. “Clinicians may use these findings to counsel people with NAFLD on the expected clinically significant improvements in liver biomarkers after weight loss and direct the patients toward valuable interventions.”
“The accumulated evidence supports changing the clinical guidelines and routine practice to recommend formal weight loss programs to treat people with NAFLD,” the investigators concluded.
The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
SOURCE: Koutoukidis DA et al. JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.
The AGA Practice Guide on Obesity and Weight Management, Education and Resources (POWER) paper provides physicians with a comprehensive, multidisciplinary process to guide and personalize innovative obesity care for safe and effective weight management. Learn more at https://www.gastro.org/practice-guidance/practice-updates/obesity-practice-guide
Past studies have attempted to investigate the relationship between weight loss and nonalcoholic fatty liver disease (NAFLD), but they did so with various interventions and outcome measures. Fortunately, the study by Dr. Koutoukidis and colleagues helps clear up this variability with a well-conducted systematic review. The results offer a convincing case that formal weight loss programs should be a cornerstone of NAFLD treatment, based on improvements in blood, histologic, and radiologic biomarkers of liver disease. Since pharmacologic options for NAFLD are limited, these findings are particularly important.
Although the study did not reveal improvements in fibrosis or inflammation with weight loss, this is likely due to the scarcity of trials with histologic measures or long-term follow-up. Where long-term follow-up was available, weight loss was not maintained, disallowing clear conclusions. Still, other studies have shown that sustained weight loss is associated with improvements in fibrosis and mortality, so clinicians should feel encouraged that formal weight loss programs for patients with NAFLD likely have life-saving consequences.
Elizabeth Adler, MD, and Danielle Brandman, MD, are with the University of California, San Francisco. Dr. Brandman reported financial affiliations with Conatus, Gilead, and Allergan. Their remarks are adapted from an accompanying editorial (JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2244).
FROM JAMA INTERNAL MEDICINE
Key clinical point: Formal weight loss programs are associated with clinically significant improvements in biomarkers of liver disease in patients with NAFLD.
Major finding: Weight loss interventions were associated with significantly decreased alanine aminotransferase (-9.81 U/L; I² = 97%).
Study details: A meta-analysis of 22 randomized clinical trials involving weight loss interventions for patients with nonalcoholic fatty liver disease (NAFLD).
Disclosures: The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
Source: Koutoukidis DA et al. JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.
Algorithm predicts villous atrophy in children with potential celiac disease
A new algorithm may be able to predict which children with potential celiac disease will go on to develop villous atrophy, according to investigators writing in Gastroenterology.
The risk model was developed from the largest cohort of its kind, with the longest follow-up to date, reported lead author Renata Auricchio, MD, PhD, of University Federico II in Naples, Italy, and colleagues. Using the algorithm, which relies most heavily on the baseline number of intraepithelial lymphocytes (IELs) in the mucosa, followed by age at diagnosis and genetic profile, clinicians may now consider prescribing gluten-free diets only to the highest-risk patients rather than to all suspected cases; the investigators noted that more than half of potential cases do not develop flat mucosa within 12 years.
Development of the algorithm began with enrollment of 340 children aged 2-18 years who were positive for endomysial immunoglobulin A antibodies and had tested positive twice consecutively for antitissue transglutaminase antibodies. Additionally, children were required to possess HLA DQ2- or DQ8-positive haplotypes and have normal duodenal architecture in five biopsy samples. Because of symptoms suggestive of celiac disease or parental discretion, 60 patients were started on a gluten-free diet and excluded from the study, leaving 280 patients in the final cohort. These patients were kept on a gluten-containing diet and followed for up to 12 years. Every 6 months, the investigators checked antibodies and clinical status, and every 2 years, a small bowel biopsy was performed, if symptoms had not necessitated it earlier.
After a median follow-up of 60 months, ranging from 18 months to 12 years, 39 patients (13.9%) developed symptoms of celiac disease and were placed on a gluten-free diet, although they declined confirmatory biopsy, disallowing classification of celiac disease. Another 33 patients (11.7%) were lost to follow-up, and 89 (32%) stopped producing antibodies, with none going on to develop villous atrophy. In total, 42 patients (15%) developed flat mucosa during the follow-up period, with an estimated cumulative incidence of 43% at 12 years. The investigators noted that patients most frequently progressed within two time frames: at 24-48 months after enrollment or at 96-120 months.
To develop the algorithm, the investigators performed multivariable analysis with several potential risk factors, including age, sex, genetic profile, mucosal characteristics, and concomitant autoimmune diseases. Of these, a high number of IELs on first biopsy was most strongly correlated with progression to celiac disease. Patients who developed villous atrophy had a mean of 11.9 IELs at first biopsy, compared with 6.44 among those who remained potential (P = .05). The next strongest predictive factors were age and genetic profile. Just 7% of children younger than 3 years developed flat mucosa, compared with 51% of patients aged 3-10 years and 55% of those older than 10 years (P = .007). HLA status was predictive in the group aged 3-10 years but not significant in the youngest or oldest patients. Therefore, HLA haplotype was included in the final algorithm, but with a smaller contribution than five non-HLA genes, namely, IL12a, SH2B3, RGS1, CCR, and IL2/IL21.
“Combining these risk factors, we set up a model to predict the probability for a patient to evolve from potential celiac disease to villous atrophy,” the investigators wrote. “Overall, the discriminant analysis model allows us to correctly classify, at entry, 80% of the children who will not develop a flat mucosa over follow-up, while approximately 69% of those who will develop flat mucosa are correctly classified by the parameters we analyzed. This system is then more accurate to predict a child who will not develop flat mucosa and then can be monitored on a gluten-containing diet than a child who will become celiac.”
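For illustration, a discriminant-analysis risk model of this general kind can be prototyped in a few lines with scikit-learn. In the sketch below, the feature set (baseline IEL count, age group, HLA risk category, and a composite non-HLA genetic score), the synthetic training data, and the example patient are all assumptions chosen for demonstration; the code does not reproduce the authors' published model or its coefficients.

```python
# Minimal sketch of a linear discriminant analysis (LDA) risk model.
# All features, data, and thresholds are hypothetical illustrations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: IEL count, age group (0: <3 y, 1: 3-10 y, 2: >10 y),
# HLA risk category (simplified to binary), and a composite non-HLA gene score.
X = np.column_stack([
    rng.normal(8, 3, n),      # baseline IEL count
    rng.integers(0, 3, n),    # age group
    rng.integers(0, 2, n),    # HLA risk category
    rng.normal(0, 1, n),      # non-HLA genetic score
])

# Synthetic outcome (1 = progressed to villous atrophy), loosely tied to IEL
# count and age purely so the example model has something to learn.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 3, n) > 12).astype(int)

model = LinearDiscriminantAnalysis()
model.fit(X, y)

# Predicted probability of progression for one hypothetical child.
new_patient = np.array([[11.9, 1, 1, 0.5]])
print(model.predict_proba(new_patient)[0, 1])
```

In practice, the discriminating variables, their weights, and the classification cutoffs would come from the cohort data described above, not from synthetic values.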
The investigators acknowledged that IEL counting may not be a routinely available diagnostic test; however, they recommended it, even if it necessitates referral. “The [IEL] count turned out to be crucial for the prediction power of the discriminant analysis,” the investigators wrote.
“The long-term risks of potential celiac disease have never been accurately evaluated. Thus, before adopting a wait-and-see strategy on a gluten-containing diet, a final decision should always be shared with the family.”
Still, the investigators concluded that gluten-free diet “should not be prescribed indistinctly to all patients” with potential celiac disease, as it is a “very heterogenic condition and is not necessarily the first step of overt disease.”
The investigators disclosed no funding or conflicts of interest.
SOURCE: Auricchio R et al. Gastroenterology. 2019 Apr 9. doi: 10.1053/j.gastro.2019.04.004.
While the simplification of the diagnostic process for celiac disease (CD), now heavily reliant on CD-specific autoantibodies, has made the life of clinicians easier in many respects, new scenarios also have emerged that are posing new challenges. One of them is that a substantial, growing portion of subjects (who may or may not have symptoms) present with positive CD autoantibodies but a normal duodenal mucosa (“potential celiac patient”). If left on gluten, with time some will develop villous atrophy, but some won’t. What is the clinician supposed to do with them?
The paper by Auricchio et al. addresses this issue in a rigorous, well-structured way by closely and prospectively monitoring a large series of pediatric patients. Their conclusions have very useful implications for the clinician. In fact, taking into consideration several criteria they found valuable after a long observation period – such as age of the child, HLA status, persistence of elevated CD-specific autoantibodies, and presence or absence of intraepithelial lymphocytes in the initial biopsy – they concluded that one can correctly identify from the outset four out of five potential celiac patients who will not develop villous atrophy and thus do not need to follow a gluten-free diet.
Ultimately, however, let’s not forget that we are still dealing with percentages of risk to develop full-blown CD, not with definitive certainties. Hence, the decision of starting a gluten-free diet or not (and of how often and in which way to monitor those who remain on gluten) remains a mutually agreed upon plan sealed by two actors: on one side the patient (or the patient’s family); and on the other, an experienced health care provider who has clearly explained the facts. In other words, evidence-based criteria, good old medicine, and a grain of salt!
Stefano Guandalini, MD, is a pediatric gastroenterologist at the University of Chicago Medical Center. He has no conflicts of interest.
FROM GASTROENTEROLOGY
Immune modulators help anti-TNF agents battle Crohn’s disease, but not UC
Adding an immune modulator (IM) to anti–tumor necrosis factor (anti-TNF) initiation therapy benefits patients with Crohn’s disease (CD) but not those with ulcerative colitis (UC), according to a recent retrospective look at more than 1,000 cases.
The study showed that patients with CD who started combination therapy instead of monotherapy had lower rates of treatment ineffectiveness, experienced longer delays until hospitalization, and less often needed to switch their anti-TNF agent, reported lead author Laura E. Targownik, MD, of the University of Manitoba, in Winnipeg, Canada, and colleagues.
“Current guidelines on the medical management of IBD strongly support the use of IMs and anti-TNFs in combination over anti-TNF monotherapy,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, there is a sparsity of real-world data demonstrating the incremental benefits of combination therapy.”
The investigators noted that the SONIC trial, published in 2010, showed that patients treated with combination therapy were more likely to achieve corticosteroid-free remission at weeks 26 and 50; this became the basis of evidence leading multiple clinical guidelines to recommend combination therapy for patients with CD.
The present study involved 852 patients with CD and 303 with UC who began treatment with an anti-TNF agent during 2001-2016. Data were drawn from the Manitoba Inflammatory Bowel Disease (IBD) Epidemiology database.
The main outcome of interest was treatment ineffectiveness, which was defined by any of the following four events: acute, IBD-related hospital admission for more than 48 hours; resective intestinal surgery; corticosteroid use at least 14 days after initiating anti-TNF therapy, or, if corticosteroids were used within 16 weeks of anti-TNF initiation, then subsequent corticosteroid use occurring at least 16 weeks after initiation; or switching to a different anti-TNF agent. The investigators also looked for differences in effectiveness between two agents from each class: anti-TNF agents infliximab and adalimumab, and immunomodulators methotrexate and azathioprine.
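To make the composite endpoint concrete, the short sketch below checks the four ineffectiveness events for a single anti-TNF course. The field names and the exact handling of the corticosteroid rule are assumptions based on the wording above, not the authors' analysis code.

```python
# Illustrative composite-endpoint check; all field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AntiTNFCourse:
    ibd_admission_over_48h: bool             # acute IBD-related admission lasting > 48 hours
    resective_intestinal_surgery: bool
    days_to_first_steroid_use: Optional[int] # days from anti-TNF start; None if never used
    steroids_within_16_weeks: bool           # corticosteroids used within 16 weeks of initiation
    later_steroids_after_16_weeks: bool      # a subsequent course >= 16 weeks after initiation
    switched_anti_tnf_agent: bool

def treatment_ineffective(c: AntiTNFCourse) -> bool:
    """Any one of the four events marks the anti-TNF course as ineffective."""
    if c.steroids_within_16_weeks:
        # Early steroid use only counts if another course follows >= 16 weeks after initiation.
        steroid_event = c.later_steroids_after_16_weeks
    else:
        steroid_event = (c.days_to_first_steroid_use is not None
                         and c.days_to_first_steroid_use >= 14)
    return (c.ibd_admission_over_48h
            or c.resective_intestinal_surgery
            or steroid_event
            or c.switched_anti_tnf_agent)
```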
Results showed that patients with CD had higher rates of ineffectiveness-free survival when treated with combination therapy instead of monotherapy at 1 year (74.2% vs. 68.6%) and 2 years (64.0% vs. 54.5%). Using a Cox proportional hazards model, this translated to a 38% reduced risk of treatment ineffectiveness (adjusted hazard ratio, 0.62).
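The 38% figure follows directly from the adjusted hazard ratio, since the relative reduction in the hazard of treatment ineffectiveness is one minus that ratio:

$$1 - \mathrm{aHR} = 1 - 0.62 = 0.38$$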
“This suggests that the findings of the SONIC trial may extend to real-world clinical practice, even in patients who had previous IM exposure,” the investigators noted.
Combination therapy was also significantly associated with longer time to first IBD-related hospitalization (HR, 0.53) and longer time to switching anti-TNF agents (HR, 0.63). However, no such relationships were found for time to resective surgery or corticosteroid use. Although combination therapy had no impact on the rate of primary treatment ineffectiveness in multivariable logistic regression, those who received anti-TNF therapy for more than 90 days had delayed secondary treatment ineffectiveness and fewer IBD-related hospitalizations. Choice of agent from either class had no influence on the effectiveness of combination therapy.
In contrast with the above findings, combination therapy in patients with UC was less promising, which aligns with previous studies.
“[W]e were not able to demonstrate a significant advantage to combination therapy in persons with UC,” the investigators wrote. “In addition, all published cohort studies to date have not been able to confirm a significant benefit to combination therapy in UC. ... In light of the lower quality of prior evidence, combined with the results from our study, the indication for combination therapy in UC would appear to be weaker.”
“Further analyses in larger cohorts may clarify whether there is a clinically relevant benefit of combination therapy in persons with UC,” the investigators concluded. “Because of the discrepancy between our findings and those of a meta-analysis of cohort studies previously published on this topic, confirmation of our results is required in future studies.”
The investigators disclosed no funding or conflicts of interest.
SOURCE: Targownik LE et al. Clin Gastroenterol Hepatol. 2018 Nov 15. doi: 10.1016/j.cgh.2018.11.003.
Twenty years after the approval of the first anti–tumor necrosis factor (TNF) biologic agent for the treatment of inflammatory bowel disease (IBD), patients and providers are still learning how to optimize these medications. One optimization is the use of combination therapy (immunomodulator plus anti-TNF). Immunomodulators are used independently for maintenance of remission of IBD, and they have been shown to reduce immunogenicity and improve efficacy when used in combination with an anti-TNF agent in prior short-term randomized controlled trials. However, use of combination therapy in the real world is not universally practiced, and data are lacking on the risks and benefits of long-term use of these agents. Therefore, this article by Targownik et al. is very timely.
Importantly, a mixed group of patients who had previously been on azathioprine monotherapy and those newly starting this therapy at the time of anti-TNF initiation were included in this cohort (a group similar to what we see in real-world practice). Data on risk factors for disease complications, such as disease phenotype or severity, were not available. By contrast, none of the efficacy associations were improved in the smaller group of patients with ulcerative colitis on combination therapy.
As providers counsel patients on the benefits and risks of various IBD treatment choices, these data by Targownik et al. will inform decisions. Future research should incorporate additional means of biologic optimization, such as the use of therapeutic drug monitoring and/or risk factor–based selection of therapeutic agents, to better inform individualized treatment choices.
Millie D. Long MD, MPH, is an associate professor of medicine in the division of gastroenterology and hepatology; Inflammatory Bowel Diseases Center; vice chief for education; director, Gastroenterology and Hepatology Fellowship Program at the University of North Carolina at Chapel Hill. She has the following conflicts of interest: AbbVie, Takeda, Pfizer, UCB, Janssen, Salix, Prometheus, Target Pharmasolutions, and Valeant.
Adding an immune modulator (IM) to anti–tumor necrosis factor (anti-TNF) initiation therapy benefits patients with Crohn’s disease (CD) but not those with ulcerative colitis (UC), according to a recent retrospective look at more than 1,000 cases.
The study showed that patients with CD who started combination therapy instead of monotherapy had lower rates of treatment ineffectiveness, experienced longer delays until hospitalization, and less often needed to switch their anti-TNF agent, reported lead author Laura E. Targownik, MD, of the University of Manitoba, in Winnipeg, Canada, and colleagues.
“Current guidelines on the medical management of IBD strongly support the use of IMs and anti-TNFs in combination over anti-TNF monotherapy,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, there is a sparsity of real-world data demonstrating the incremental benefits of combination therapy.”
The investigators noted that the SONIC trial, published in 2010, showed that patients treated with combination therapy were more likely to achieve corticosteroid-free remission at weeks 26 and 50; this evidence led multiple clinical guidelines to recommend combination therapy for patients with CD.
The present study involved 852 patients with CD and 303 with UC who began treatment with an anti-TNF agent during 2001-2016. Data were drawn from the Manitoba Inflammatory Bowel Disease (IBD) Epidemiology database.
The main outcome of interest was treatment ineffectiveness, which was defined by any of the following four events: acute, IBD-related hospital admission for more than 48 hours; resective intestinal surgery; corticosteroid use at least 14 days after initiating anti-TNF therapy, or, if corticosteroids were used within 16 weeks of anti-TNF initiation, then subsequent corticosteroid use occurring at least 16 weeks after initiation; or switching to a different anti-TNF agent. The investigators also looked for differences in effectiveness between two agents from each class: anti-TNF agents infliximab and adalimumab, and immunomodulators methotrexate and azathioprine.
Results showed that patients with CD had higher rates of ineffectiveness-free survival when treated with combination therapy instead of monotherapy at 1 year (74.2% vs. 68.6%) and 2 years (64.0% vs. 54.5%). Using a Cox proportional hazards model, this translated to a 38% reduced risk of treatment ineffectiveness (adjusted hazard ratio, 0.62).
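A note on the statistics, for readers who want to see the mechanics: an adjusted hazard ratio of 0.62 corresponds to a 38% relative reduction because 1 − 0.62 = 0.38, and such a ratio is typically estimated with a Cox proportional hazards regression. The sketch below, written with the Python lifelines package on invented data, is purely illustrative; the column names, covariates, and values are assumptions and do not reproduce the authors' analysis.

```python
# Minimal, illustrative Cox proportional hazards fit (toy data, not the authors' analysis).
# Hypothetical columns: months until treatment ineffectiveness, an event flag,
# a combination-therapy indicator, and age as an example adjustment covariate.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_ineffectiveness": [6, 14, 24, 9, 30, 18, 3, 27],
    "ineffectiveness_event":     [1, 1, 0, 1, 0, 1, 1, 0],   # 1 = event, 0 = censored
    "combination_therapy":       [0, 1, 1, 0, 1, 0, 0, 1],   # 1 = immunomodulator + anti-TNF
    "age_at_start":              [34, 45, 29, 52, 41, 38, 60, 33],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_ineffectiveness", event_col="ineffectiveness_event")

# exp(coefficient) for combination_therapy is the adjusted hazard ratio;
# a value of 0.62 would correspond to a 38% lower hazard of treatment ineffectiveness.
print(cph.hazard_ratios_)
```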
“This suggests that the findings of the SONIC trial may extend to real-world clinical practice, even in patients who had previous IM exposure,” the investigators noted.
Combination therapy was also significantly associated with longer time to first IBD-related hospitalization (HR, 0.53) and longer time to switching anti-TNF agents (HR, 0.63). However, no such relationships were found for time to resective surgery or corticosteroid use. Although combination therapy had no impact on the rate of primary treatment ineffectiveness in multivariable logistic regression, those who received anti-TNF therapy for more than 90 days had delayed secondary treatment ineffectiveness and fewer IBD-related hospitalizations. Choice of agent from either class had no influence on the effectiveness of combination therapy.
In contrast with the above findings, combination therapy in patients with UC was less promising, which aligns with previous studies.
“[W]e were not able to demonstrate a significant advantage to combination therapy in persons with UC,” the investigators wrote. “In addition, all published cohort studies to date have not been able to confirm a significant benefit to combination therapy in UC. ... In light of the lower quality of prior evidence, combined with the results from our study, the indication for combination therapy in UC would appear to be weaker.”
“Further analyses in larger cohorts may clarify whether there is a clinically relevant benefit of combination therapy in persons with UC,” the investigators concluded. “Because of the discrepancy between our findings and those of a meta-analysis of cohort studies previously published on this topic, confirmation of our results is required in future studies.”
The investigators disclosed no funding or conflicts of interest.
SOURCE: Targownik LE et al. Clin Gastroenterol Hepatol. 2018 Nov 15. doi: 10.1016/j.cgh.2018.11.003.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
ADMIRAL results solidify gilteritinib as new standard for FLT3-mutated AML
AMSTERDAM – For patients with FLT3-mutated, relapsed or refractory acute myeloid leukemia (AML), gilteritinib (Xospata) offers better median overall survival than salvage chemotherapy, according to results from the phase 3 ADMIRAL trial.
Patients treated with gilteritinib also more often responded to therapy and entered remission, reported lead author Alexander Perl, MD, of the University of Pennsylvania, Philadelphia.
To overcome resistance mechanisms to existing FLT3 inhibitors, drug developers have been seeking agents with activity against both FLT3-ITD and FLT3-TKD mutations, Dr. Perl explained during his presentation at the annual congress of the European Hematology Association. “Gilteritinib is one of these agents,” he said, noting a unique mechanism of action that also may limit toxicity concerns associated with existing FLT3 inhibitors.
The international ADMIRAL trial involved 371 patients with FLT3-mutated AML who had not responded to induction therapy or were untreated after first relapse.
The population was randomized in a 2:1 ratio to receive either gilteritinib 120 mg/day or one of four salvage chemotherapy regimens: azacitidine (AZA), low-dose cytarabine (LoDAC), mitoxantrone/etoposide/cytarabine (MEC), or fludarabine/cytarabine/granulocyte colony-stimulating factor/idarubicin (FLAG-IDA).
Coprimary endpoints were overall survival and the combined rate of complete remission and complete remission with partial hematologic recovery (CR/CRh). Secondary endpoints were complete remission rate and event-free survival.
Demographic data showed that the median patient age was 62 years with a broad range (19-85 years). Most patients were positive for FLT3-ITD (88.4%), while fewer tested positive for FLT3-TKD (8.4%) or both mutations (1.9%). Relapsed AML was more common than refractory disease (60.6% vs. 39.4%).
The efficacy analysis revealed that patients treated with gilteritinib had a median overall survival of 9.3 months, significantly longer than the 5.6 months among those treated with salvage chemotherapy (hazard ratio for death = 0.637; P = .0007). The 1-year survival rate was 37.1% for the gilteritinib group, compared with 16.7% among those who received chemotherapy.
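For readers unfamiliar with how a median overall survival and a 1-year survival rate are read off a Kaplan-Meier curve, the sketch below shows a minimal estimate using the Python lifelines package; the survival times are invented for illustration and are not the ADMIRAL data.

```python
# Illustrative Kaplan-Meier estimate (toy data, not the ADMIRAL dataset).
from lifelines import KaplanMeierFitter

# Hypothetical survival times in months and event indicators (1 = death, 0 = censored).
months = [2, 4, 5, 7, 9, 9, 12, 15, 18, 24]
died =   [1, 1, 0, 1, 1, 1, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=died, label="hypothetical treatment arm")

# Median overall survival: the time at which the estimated survival curve crosses 0.5.
print("median OS (months):", kmf.median_survival_time_)

# 1-year survival rate: the estimated survival probability at 12 months.
print("12-month survival:", float(kmf.survival_function_at_times(12).iloc[0]))
```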
The superiority of gilteritinib was further supported by twofold higher rates of CR/CRh (34.0% vs. 15.3%) and complete remission (21.1% vs. 10.5%). Similarly, median event-free survival was significantly longer in the gilteritinib group (2.8 vs. 0.7 months). Most subgroups, such as age and sex, showed consistent benefit.
Overall, gilteritinib demonstrated a favorable safety profile. After adjusting for exposure duration, serious treatment-related adverse events were more common in the chemotherapy group than in the gilteritinib group (9.2% vs. 7.1%). Common grade 3 or higher adverse events related to gilteritinib were anemia (19.5%), febrile neutropenia (15.4%), thrombocytopenia (12.2%), and decreased platelet count (12.2%).
“We were able to give [gilteritinib] in an outpatient setting,” Dr. Perl said.
Although comparisons between responses based on mutation type were not possible, owing to small sample sizes, Dr. Perl highlighted that gilteritinib showed activity against both FLT3 mutation subtypes.
“This drug has been approved on the results of this study,” Dr. Perl said. “Because of this, we have a new standard of care for this population.”
The study was funded by Astellas. The investigators reported financial relationships with AbbVie, Bayer, Takeda, and other companies.
SOURCE: Perl A et al. EHA Congress, Abstract S876.
REPORTING FROM EHA CONGRESS
Guadecitabine offers limited advantage over other standards for high-risk AML
AMSTERDAM – For treatment-naive patients with acute myeloid leukemia (AML) who are ineligible for intensive chemotherapy, guadecitabine offers efficacy similar to that of other standard treatment options, although patients who receive at least four cycles appear to derive a modest survival advantage, based on results from the phase 3 ASTRAL-1 trial.
Complete responders also derived greater benefit from guadecitabine, a new hypomethylating agent, reported lead author Pierre Fenaux, MD, PhD, of the Hôpital Saint Louis, Paris.
With 815 patients, ASTRAL-1 was the largest global, randomized trial to compare low-intensity therapy options in this elderly, unfit population – specifically, patients who were at least 75 years old or had an Eastern Cooperative Oncology Group (ECOG) performance status of 3 or more, Dr. Fenaux said at the annual congress of the European Hematology Association.
They were randomized in a 1:1 ratio to receive guadecitabine or one of three other treatment options: azacitidine, decitabine, or low-dose cytarabine. The coprimary endpoints of the trial were complete response rate and median overall survival. Safety measures were also investigated.
A demographic analysis showed that almost two-thirds of patients were at least 75 years old (62%), and about half had an ECOG status of 2 or 3, or bone marrow blasts. Approximately one-third of patients had poor-risk cytogenetics and a slightly higher proportion had secondary AML.
After a median follow-up of 25.5 months, patients had received, on average, five cycles of therapy. However, many patients (42%) received three or fewer cycles because of early death or disease progression. This therapy cessation rate was similar between the guadecitabine group (42.4%) and the other treatment group (40.8%).
The study failed to meet either coprimary endpoint across the entire patient population. Median overall survival was 7.10 months for guadecitabine versus 8.47 months for the other treatments, but this difference was not statistically significant (P = .73). Similarly, the complete response rate was slightly higher for guadecitabine (19.4% vs. 17.4%), but again, this finding carried a nonsignificant P value (P = .48).
The benefit offered by guadecitabine was realized only with extended treatment and in complete responders.
Patients who received a minimum of four cycles of guadecitabine had a median overall survival of 15.6 months, compared with 13.0 months for other treatments (P = .02). This benefit became more pronounced in those who received at least six cycles, which was associated with median overall survival of 19.5 months versus 14.9 months (P = .002). Complete responders also had extended survival when treated with guadecitabine, although this benefit was of a lesser magnitude (22.6 vs. 20.6 months; P = .07).
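P values for comparisons of survival curves such as these are commonly obtained with a log-rank test; the snippet below is a minimal, hypothetical illustration of that test using the Python lifelines package and is not the trial's actual statistical analysis.

```python
# Illustrative log-rank comparison of two survival curves (hypothetical data).
from lifelines.statistics import logrank_test

# Overall survival in months for two invented groups, with death indicators (1 = death).
group_a_months = [20, 14, 25, 9, 30, 16, 22, 11]
group_a_died   = [1, 1, 0, 1, 0, 1, 0, 1]
group_b_months = [12, 8, 15, 10, 18, 6, 13, 9]
group_b_died   = [1, 1, 1, 1, 0, 1, 1, 1]

result = logrank_test(
    group_a_months, group_b_months,
    event_observed_A=group_a_died,
    event_observed_B=group_b_died,
)

# A small p-value (for example, P = .02 in the report) indicates the two survival
# curves differ more than would be expected by chance under the log-rank test.
print("log-rank p-value:", result.p_value)
```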
Most subgroup analyses, accounting for various clinical and genetic factors, showed no significant differences in primary outcomes between treatment arms, with one exception: TP53 mutations were associated with poorer responses to guadecitabine, whereas wild-type TP53 predicted better responses.
Adverse events were common, although most measures were not significantly different between treatment arms. For example, serious adverse events occurred in 81% and 75.5% of patients treated with guadecitabine and other options, respectively, while grade 3 or higher adverse events occurred in 91.5% of guadecitabine patients and 87.5% of patients treated with other options, but neither difference was statistically significant.
Adverse events leading to death occurred in 28.7% of patients treated with guadecitabine versus 29.8% of other patients, a nonsignificant difference. In contrast, Dr. Fenaux noted that patients treated with guadecitabine were significantly more likely to develop febrile neutropenia (33.9% vs. 26.5%), neutropenia (27.4% vs. 20.7%), and pneumonia (29.4% vs. 19.6%).
“In those patients [that received at least four cycles], there seemed to be some advantage of guadecitabine, which needs to be further explored,” Dr. Fenaux said. “But at least [this finding] suggests once more that for a hypomethylating agent to be efficacious, it requires a certain number of cycles, and whenever possible, at least 6 cycles to have full efficacy.”
The study was funded by Astex and Otsuka. The investigators reported additional relationships with Celgene, Janssen, and other companies.
SOURCE: Fenaux P et al. EHA Congress, Abstract S879.
REPORTING FROM EHA CONGRESS
Rituximab and vemurafenib could challenge frontline chemotherapy for HCL
AMSTERDAM – A combination of rituximab and the BRAF inhibitor vemurafenib could be the one-two punch needed for relapsed or refractory hairy cell leukemia (HCL), according to investigators.
Among evaluable patients treated with this combination, 96% achieved complete remission, reported lead author Enrico Tiacci, MD, of the University and Hospital of Perugia, Italy.
This level of efficacy is “clearly superior to historical results with either agent alone,” Dr. Tiacci said during a presentation at the annual congress of the European Hematology Association, citing previous complete response rates with vemurafenib alone of 35%-40%. “[This combination] has potential for challenging chemotherapy in the frontline setting,” he said.
The phase 2 trial involved 31 patients with relapsed or refractory HCL who had received a median of three previous therapies. Eight of the patients (26%) had primary refractory disease. Patients received vemurafenib 960 mg twice daily for 8 weeks plus rituximab 375 mg/m2 every 2 weeks. After finishing vemurafenib, patients received four more doses of rituximab at the same 2-week interval. Complete remission was defined as a normal blood count, no leukemic cells in bone marrow biopsies and blood smears, and no palpable splenomegaly.
Out of 31 patients, 27 were evaluable at data cutoff. Of these, 26 (96%) achieved complete remission. The investigators noted that two complete responders had incomplete platelet recovery at the end of treatment that resolved soon after, and two patients had persistent splenomegaly, but were considered to be in complete remission at 22.5 and 25 months after finishing therapy.
All of the complete responders had previously received purine analogs, while a few had been refractory to a prior BRAF inhibitor (n = 7) and/or rituximab (n = 5).
The investigators also pointed out that 15 out of 24 evaluable patients (63%) achieved complete remission just 4 weeks after starting the trial regimen. Almost two-thirds of patients (65%) were negative for minimal residual disease (MRD). The rate of progression-free survival at a median follow-up of 29.5 months was 83%. Disease progression occurred exclusively in patients who were MRD positive.
The combination was well tolerated; most adverse events were of grade 1 or 2, overlapping with the safety profile of each agent alone.
Reflecting on the study findings, Dr. Tiacci suggested that the combination could be most effective if delivered up front, rather than after failure of a BRAF inhibitor.
“Interestingly,” he said, “the relapse-free survival in patients naive to a BRAF inhibitor remained significantly longer than the relapse-free interval that patients previously exposed to a BRAF inhibitor enjoyed, both following monotherapy with a BRAF inhibitor and following subsequent combination with rituximab, potentially suggesting that vemurafenib should be used directly in combination with rituximab rather than being delivered first as a monotherapy and then added to rituximab at relapse.”
Randomized testing of the combination against the chemotherapy-based standard of care in the frontline setting is warranted, the investigators concluded.
Dr. Tiacci reported financial relationships with Roche, AbbVie, and Shire.
SOURCE: Tiacci E et al. EHA Congress, Abstract S104.
REPORTING FROM EHA CONGRESS