Skeletal-Related Events in Patients With Multiple Myeloma and Prostate Cancer Who Receive Standard vs Extended-Interval Bisphosphonate Dosing


In patients with multiple myeloma and prostate cancer, extending the bisphosphonate dosing interval may help decrease medication-related morbidity without compromising therapeutic benefit.

Bone pain is one of the most common causes of morbidity in multiple myeloma (MM) and metastatic prostate cancer (CaP). This pain originates with the underlying pathologic processes of the cancer and with downstream skeletal-related events (SREs). SREs—fractures, spinal cord compression, and irradiation or surgery involving ≥ 1 bone site—represent a significant health care burden, particularly given the incidence of the underlying malignancies. According to American Cancer Society statistics, CaP is the second most common cancer in American men, and MM is the second most common hematologic malignancy, despite its relatively low overall lifetime risk.1,2 Regardless of the underlying malignancy, bisphosphonates are the cornerstone of SRE prevention, though the optimal dosing strategy is the subject of clinical debate.

Although similar in SRE incidence, MM and CaP have distinct pathophysiologic processes in the dysregulation of bone resorption. MM is a hematologic malignancy that increases the risk of SREs through osteoclast up-regulation, primarily via the RANK (receptor activator of nuclear factor κB) signaling pathway.3 CaP is a solid tumor malignancy that metastasizes to bone. Dysregulation of the bone resorption and formation cycle and net bone loss result from endogenous osteoclast up-regulation in response to abnormal bone formation in osteoblastic bone metastases.4 Androgen-deprivation therapy, the cornerstone of CaP treatment, further predisposes patients with CaP to osteoporosis and SREs.

Prevention of SREs is pharmacologically driven by bisphosphonates, which exert antiresorptive effects on bone by promoting osteoclast apoptosis.5 Two IV formulations, pamidronate and zoledronic acid (ZA), are approved by the US Food and Drug Administration for use in bone metastases from MM or solid tumors.6-10 Although generally well tolerated, bisphosphonates can cause osteonecrosis of the jaw (ONJ), an avascular death of bone tissue, particularly with prolonged use.11 With a documented incidence of 5% to 6.7% in patients with bone metastases, ONJ represents a significant morbidity risk in patients with MM and CaP who are treated with IV bisphosphonates.12

Investigators are exploring bisphosphonate dosing intervals to determine which is most appropriate in mitigating the risk of ONJ. Before 2006, bisphosphonates were consistently dosed once monthly in patients with MM or metastatic bone disease—a standard derived empirically rather than from comparative studies or compelling pharmacodynamic data.13-15 In a 2006 consensus statement, the Mayo Clinic issued an expert opinion recommendation for increasing the bisphosphonate dosing interval to every 3 months in patients with MM.16 The first objective evidence for the clinical applicability of extending the ZA dosing interval was reported by Himelstein and colleagues in 2017.17 The randomized clinical trial found no differences in SRE rates when ZA was dosed every 12 weeks,17 prompting a conditional recommendation for dosing interval extension in the American Society of Clinical Oncology MM treatment guidelines (2018).13 Because of the age and racial demographics of the patients in these studies, many questions remain unanswered.

For the US Department of Veterans Affairs (VA) population, the pharmacokinetic and pharmacodynamic differences imposed by age and race limit the applicability of the available data. However, in veterans with MM or CaP, extending the bisphosphonate dosing interval may help decrease medication-related morbidity (eg, ONJ, nephrotoxicity) without compromising therapeutic benefit. To this end, at the Memphis VA Medical Center (VAMC), we assessed for differences in SRE rates by comparing outcomes of patients who received ZA with standard- vs extended-interval dosing.

Methods

We retrospectively reviewed the Computerized Patient Record System for veterans with MM or metastatic CaP treated with ZA at the Memphis VAMC. Study inclusion criteria were age > 18 years and care provided by a Memphis VAMC oncologist between January 2003 and January 2018. The study was approved by the Memphis VAMC’s Institutional Review Board, and procedures were followed in accordance with the ethical standards of its committee on human experimentation.

Using Microsoft SQL Server 2016 (Microsoft, Redmond, WA), we performed a query to identify patients who were prescribed ZA during the study period. Exclusion criteria were ZA prescribed for an indication other than MM or CaP (eg, osteoporosis) and receipt of ≤ 1 dose of ZA. Once a list was compiled, patients were stratified by ZA dosing interval: standard (mean, every month) or extended (mean, every 3 months). Patients whose ZA dosing interval was changed during treatment were included as independent data points in each group.
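To make the stratification rule concrete, the following minimal Python sketch classifies a patient by the mean gap between recorded ZA infusion dates. The 45- and 70-day cutoffs and the function name are illustrative assumptions, not the actual query logic used in the study.

```python
from datetime import date
from statistics import mean

def classify_interval(dose_dates):
    """Classify a ZA regimen by the mean gap (in days) between infusion dates.

    The 45- and 70-day cutoffs are illustrative assumptions approximating
    "monthly" vs "every 3 months" dosing; they are not from the study.
    """
    gaps = [(later - earlier).days
            for earlier, later in zip(dose_dates, dose_dates[1:])]
    mean_gap = mean(gaps)
    if mean_gap <= 45:
        return "standard"   # roughly monthly
    if mean_gap >= 70:
        return "extended"   # roughly every 3 months
    return "indeterminate"  # would require chart review

# Example: four doses given roughly 4 weeks apart -> "standard"
doses = [date(2016, 1, 5), date(2016, 2, 2), date(2016, 3, 1), date(2016, 3, 29)]
print(classify_interval(doses))
```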

Skeletal-related events included fractures, spinal compression, irradiation, and surgery. Fractures and spinal compression were counted when radiographic documentation (eg, X-ray, magnetic resonance imaging) was obtained during the period the patient received ZA or within 1 dosing interval of the last recorded ZA dose. Irradiation was defined as documented application of radiation therapy to ≥ 1 bone site for palliation of pain or as an intervention in the setting of spinal compression. Surgery was defined as any procedure performed to correct a fracture or spinal compression. Each SRE was counted as a single occurrence.

Osteonecrosis of the jaw was defined as radiographically documented necrosis of the mandible or associated structures with assessment by a VA dentist. Records from non-VA dental practices were not available for assessment. Documentation of dental assessment before the first dose of ZA and any assessments during treatment were recorded.

Medication use was assessed before and during ZA treatment. Number of ZA doses and reasons for any discontinuations were documented, as was concomitant use of calcium supplements, vitamin D supplements, calcitriol, paricalcitol, calcitonin, cinacalcet, and pamidronate.

The primary study outcome was observed difference in incidence of SREs between standard- and extended-interval dosing of ZA. Secondary outcomes included difference in incidence of ONJ as well as incidence of SREs and ONJ by disease subtype (MM, CaP).

Descriptive statistics were used to summarize demographic data and assess prespecified outcomes. Differences in rates of SREs and ONJ between dosing interval groups were analyzed with the Pearson χ2 test. The a priori level of significance was .05.
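As an illustration of the between-group comparison, the sketch below applies a Pearson χ2 test to a hypothetical 2 × 2 table of SRE counts using SciPy; the counts are invented for demonstration and are not the study data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = dosing group, columns = SRE yes / SRE no.
# Counts are invented for illustration and are not the study data.
table = [[38, 83],   # standard interval
         [8, 27]]    # extended interval

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p_value:.3f}")
# A P value below the prespecified .05 threshold would indicate a
# statistically significant difference in SRE rates between groups.
```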

Results

Of the 300 patients prescribed ZA at the Memphis VAMC, 177 were excluded (96 for indication, 78 for receiving only 1 dose of ZA, and 3 for not receiving any doses of ZA). The remaining 123 patients were stratified into a standard-interval dosing group (121) and an extended-interval dosing group (35). Of the 123 patients, 33 received both standard- and extended-interval dosing of ZA over the course of the study period and were included discretely in each group for the duration of each dosing strategy.

In each group, the ratio of patients with CaP to patients with MM was 5:1. The standard-interval group had a mean age of 69 years and was 98% male and 62% African American; the extended-interval group had a mean age of 68 years and was 97% male and 71% African American (Table 1).

Pre-ZA dental screenings were documented in 14% of standard-interval patients and 17% of extended-interval patients, and during-ZA screenings were documented in 17% of standard-interval patients and 20% of extended-interval patients. Chi-square analysis revealed no significant difference in rates of dental screening before or during use of ZA.

Standard-interval patients received a mean (SD) of 11.4 (13.5) doses of ZA (range, 2-124), and extended-interval patients received a mean (SD) of 5.9 (3.18) doses (range, 2-14). All standard-interval patients had discontinued treatment at the time of the study, most commonly because of death or for an unknown reason. Sixty percent of extended-interval patients had discontinued treatment, most commonly because of patient/physician choice or for an unknown reason (Table 2).

The bone-modifying agents used most commonly both before and during ZA treatment were calcium and vitamin D supplements (Table 3).

Skeletal-related events were observed in 31% of standard-interval patients and 23% of extended-interval patients. There were no statistically significant differences in SRE rates between groups (P = .374). The most common SRE in both groups was bone irradiation (42% and 60%, respectively), with no statistically significant difference in proportion between groups (Table 4). 

ONJ occurred in 3% of standard-interval patients and 0% of extended-interval patients. There were no statistically significant differences in ONJ rates between groups (P = .347) or in rates of SREs or ONJ within the MM and CaP subgroups (Table 5).

Discussion

This retrospective review of patients with MM and CaP receiving ZA for bone metastases found no differences in the rates of SREs when ZA was dosed monthly vs every 3 months.

Although this study was not powered to assess noninferiority, its results reflect the emerging evidence supporting an extension of the ZA dosing interval.

Earlier studies found that ZA can decrease SRE rates, but a major concern is that frequent, prolonged exposure to IV bisphosphonates may increase the risk of ONJ. No significant differences in ONJ rates existed between dosing groups, but all documented cases of ONJ occurred in the standard-interval group, suggesting a trend toward decreased incidence with an extension of the dosing interval.

Limitations

This study had several limitations. Geriatric African American men comprised the majority of the study population, and patients with MM accounted for only 22% of included regimens, limiting external validity. Patient overlap between groups may have confounded the results. The retrospective design precluded the ability to control for confounding variables, such as concomitant medication use and medication adherence, and significant heterogeneity was noted in rates of adherence with ZA infusion schedules regardless of dosing group. Use of medications associated with increased risk of osteoporosis—including corticosteroids and proton pump inhibitors—was not assessed.

Assessment of ONJ incidence was limited by the lack of access to dental records from providers outside the VA. Many patients in this review were not eligible for VA dental benefits because of eligibility requirements involving length of service and service connection, a designation for health conditions “incurred or aggravated during active military service.”18

The results of this study provide further support for extended-interval dosing of ZA as a potential method of increasing patient adherence and decreasing the risk of adverse drug reactions without compromising therapeutic benefit. Further randomized controlled trials are needed to define the potential decrease in ONJ incidence.

Conclusion

In comparisons of standard- and extended-interval dosing of ZA, there was no difference in the incidence of skeletal-related events in veteran patients with bone metastases from MM or CaP.

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

References

1. American Cancer Society. Cancer Facts & Figures 2018. Atlanta, GA: American Cancer Society; 2018.

2. Howlader N, Noone AM, Krapcho M, et al, eds. SEER Cancer Statistics Review (CSR), 1975-2014 [based on November 2016 SEER data submission posted to SEER website April 2017]. Bethesda, MD: National Cancer Institute; 2017. https://seer.cancer.gov/archive/csr/1975_2014/. Accessed January 12, 2019.

3. Roodman GD. Pathogenesis of myeloma bone disease. Leukemia. 2009;23(3):435-441.

4. Sartor O, de Bono JS. Metastatic prostate cancer. N Engl J Med. 2018;378(7):645-657.

5. Drake MT, Clarke BL, Khosla S. Bisphosphonates: mechanism of action and role in clinical practice. Mayo Clin Proc. 2008;83(9):1032-1045.

6. Zometa [package insert]. East Hanover, NJ: Novartis; 2016.

7. Aredia [package insert]. East Hanover, NJ: Novartis; 2011.

8. Berenson JR, Rosen LS, Howell A, et al. Zoledronic acid reduces skeletal-related events in patients with osteolytic metastases: a double-blind, randomized dose-response study [published correction appears in Cancer. 2001;91(10):1956]. Cancer. 2001;91(7):1191-1200.

9. Berenson JR, Lichtenstein A, Porter L, et al. Efficacy of pamidronate in reducing skeletal events in patients with advanced multiple myeloma. Myeloma Aredia Study Group. N Engl J Med. 1996;334(8):488-493.

10. Mhaskar R, Redzepovic J, Wheatley K, et al. Bisphosphonates in multiple myeloma: a network meta-analysis. Cochrane Database Syst Rev. 2012;(5):CD003188.

11. Wu S, Dahut WL, Gulley JL. The use of bisphosphonates in cancer patients. Acta Oncol. 2007;46(5):581-591.

12. Bamias A, Kastritis E, Bamia C, et al. Osteonecrosis of the jaw in cancer after treatment with bisphosphonates: incidence and risk factors. J Clin Oncol. 2005;23(34):8580-8587.

13. Anderson K, Ismaila N, Flynn PJ, et al. Role of bone-modifying agents in multiple myeloma: American Society of Clinical Oncology clinical practice guideline update. J Clin Oncol. 2018;36(8):812-818.

14. National Comprehensive Cancer Network Clinical Practice Guidelines in Oncology (NCCN Guidelines). Multiple Myeloma. Version 2.2019. https://www.nccn.org/professionals/physician_gls/pdf/myeloma.pdf. Accessed January 29, 2019.

15. National Comprehensive Cancer Network Clinical Practice Guidelines in Oncology (NCCN Guidelines). Prostate Cancer. Version 4.2018. https://www.nccn.org/professionals/physician_gls/pdf/prostate.pdf. Accessed January 29, 2019.

16. Lacy MQ, Dispenzieri A, Gertz MA, et al. Mayo Clinic consensus statement for the use of bisphosphonates in multiple myeloma. Mayo Clin Proc. 2006;81(8):1047-1053.

17. Himelstein AL, Foster JC, Khatcheressian JL, et al. Effect of longer-interval vs. standard dosing of zoledronic acid on skeletal events in patients with bone metastases: a randomized clinical trial. JAMA. 2017;317(1):48-58.

18. Office of Public and Intergovernmental Affairs, US Department of Veterans Affairs. Service connected disabilities. In: Federal Benefits for Veterans, Dependents, and Survivors. https://www.va.gov/opa/publications/benefits_book/benefits_chap02.asp. Published April 2015. Accessed May 22, 2018.

Author and Disclosure Information

Abigail Shell is a Pharmacist at the Piedmont Atlanta Hospital in Georgia. Leigh Keough and Kothanur Rajanna are Clinical Pharmacy Specialists in the Department of Hematology/Oncology at the Memphis VAMC in Tennessee.
Correspondence: Abigail Shell (abigail [email protected])


Prostate Cancer Surveillance After Radiation Therapy in a National Delivery System


Guideline concordance with PSA surveillance among veterans treated with definitive radiation therapy was generally high, but opportunities may exist to improve surveillance among select groups.

Guidelines recommend prostate-specific antigen (PSA) surveillance among men treated with definitive radiation therapy (RT) for prostate cancer. Specifically, the National Comprehensive Cancer Network recommends testing every 6 to 12 months for 5 years and annually thereafter (with no specified stopping point), while the American Urological Association recommends testing for at least 10 years, with the frequency to be determined by the risk of relapse and patient preferences for monitoring.1,2 Salvage treatments exist for men with localized recurrence identified early through PSA testing, so adherence to follow-up guidelines is important for quality prostate cancer survivorship care.1,2

However, few studies focus on adherence to PSA surveillance following radiation therapy. Posttreatment surveillance among surgical patients is generally high, but sociodemographic disparities exist. Racial and ethnic minorities and unmarried men are less likely to undergo guideline concordant surveillance than is the general population, potentially preventing effective salvage therapy.3,4 A recent Department of Veterans Affairs (VA) study on posttreatment surveillance included radiation therapy patients but did not examine the impact of younger age, concurrent androgen deprivation therapy (ADT), or treatment facility (ie, diagnosed and treated at the same vs different facilities, with the latter including a separate VA facility or the community) on surveillance patterns.5 The latter is particularly relevant given increasing efforts to coordinate care outside the VA delivery system supported by the 2018 VA Maintaining Systems and Strengthening Integrated Outside Networks (MISSION) Act. Furthermore, these patient, treatment, and delivery system factors may each uniquely contribute to whether patients receive guideline-recommended PSA surveillance after prostate cancer treatment.

For these reasons, we conducted a study to better understand determinants of adherence to guideline-recommended PSA surveillance among veterans undergoing definitive radiation therapy with or without concurrent ADT. Our study uniquely included both elderly and nonelderly patients as well as investigated relationships between treatment at or away from the diagnosing facility. Although we found high overall levels of adherence to PSA surveillance, our findings do offer insights into determinants associated with worse adherence and provide opportunities to improve prostate cancer survivorship care after RT.

Methods

This study population included men with biopsy-proven nonmetastatic incident prostate cancer diagnosed between January 2005 and December 2008, with follow-up through 2012, identified using the VA Central Cancer Registry. We included men who underwent definitive RT with or without concurrent ADT injections, determined using the VA pharmacy files. We excluded men with a prior diagnosis of prostate or other malignancy (because other malignancies might affect life expectancy and surveillance patterns), those enrolled in hospice within 30 days, those diagnosed at autopsy, and those treated with radical prostatectomy. We extracted cancer registry data, including biopsy Gleason score, pretreatment PSA level, clinical tumor stage, and whether RT was delivered at the patient’s diagnosing facility. For the latter, we used data on radiation location coded by the tumor registrar. We also collected demographic information, including age at diagnosis, race, ethnicity, marital status, and ZIP code. We used diagnosis codes to determine Charlson comorbidity scores similar to prior studies.6-8

Primary Outcome

The primary outcome was receipt of guideline concordant annual PSA surveillance in the initial 5 years following RT. We used laboratory files within the VA Corporate Data Warehouse to identify the date and value for each PSA test after RT for the entire cohort. Specifically, we defined the surveillance period as 60 days after initiation of RT through December 31, 2012. We defined guideline concordance as receiving at least 1 PSA test for each 12-month period after RT.
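A minimal sketch of how the annual concordance definition could be operationalized is shown below; the 60-day lead-in and 365-day windows anchored to the RT start date reflect one plausible reading of the definition rather than the authors' exact implementation.

```python
from datetime import date, timedelta

def annual_concordance(rt_start, psa_dates, n_years=5):
    """Return, for each 12-month period after RT, whether >= 1 PSA test occurred.

    Assumes the surveillance clock starts 60 days after RT initiation and uses
    365-day windows; this is one plausible reading of the definition.
    """
    start = rt_start + timedelta(days=60)
    flags = []
    for year in range(n_years):
        window_start = start + timedelta(days=365 * year)
        window_end = start + timedelta(days=365 * (year + 1))
        flags.append(any(window_start <= d < window_end for d in psa_dates))
    return flags

# Example: PSA tests roughly every 6 months satisfy every annual window.
rt = date(2006, 3, 1)
tests = [rt + timedelta(days=180 * k) for k in range(1, 11)]
print(annual_concordance(rt, tests))  # [True, True, True, True, True]
```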

Statistical Analysis

We used descriptive statistics to characterize our cohort of veterans with prostate cancer treated with RT with or without concurrent ADT. To handle missing data, we performed multiple imputation, generating 10 imputations using all baseline clinical and demographic variables, year of diagnosis, and the regional VA network (ie, the Veterans Integrated Services Network [VISN]) for each patient.
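For illustration, the sketch below generates 10 stochastic imputations of a small, hypothetical covariate table with scikit-learn; the original analysis performed multiple imputation in Stata, so this is an analogous workflow rather than the study code.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical baseline covariates with missing values; column names are
# illustrative, and the original analysis used Stata, not scikit-learn.
baseline = pd.DataFrame({
    "age": [67, 72, np.nan, 58, 75],
    "psa": [8.4, np.nan, 12.1, 5.2, 22.0],
    "gleason": [7, 6, 8, np.nan, 9],
})

# Ten stochastic imputations, analogous to generating 10 imputed datasets;
# downstream models would be fit on each and their estimates pooled.
imputed_datasets = [
    pd.DataFrame(
        IterativeImputer(sample_posterior=True, random_state=seed)
        .fit_transform(baseline),
        columns=baseline.columns,
    )
    for seed in range(10)
]
print(imputed_datasets[0].round(1))
```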

Next, we calculated the annual guideline concordance rate for each year of follow-up for each patient, for the overall cohort, as well as by age, race/ethnicity, and concurrent ADT use. We examined bivariable relationships between guideline concordance and baseline demographic, clinical, and delivery system factors, including year of diagnosis and whether patients were treated at the diagnosing facility, using multilevel logistic regression modeling to account for clustering at the patient level.
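As a sketch of the clustered analysis, the example below fits a logit-link generalized estimating equation (GEE) with patient-level clustering, a common alternative to the multilevel logistic model used in the study; the variable names and data are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient-year with a binary
# concordance outcome; column names and values are illustrative only.
df = pd.DataFrame({
    "patient_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "concordant":     [1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0],
    "concurrent_adt": [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1],
    "same_facility":  [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

# Logit-link GEE with an exchangeable working correlation accounts for
# repeated observations per patient; the study used a multilevel logistic
# model in Stata, so this is an analogous (not identical) approach.
model = smf.gee(
    "concordant ~ concurrent_adt + same_facility",
    groups="patient_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```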

Analyses were performed using Stata version 15 (StataCorp, College Station, TX). We considered a 2-sided P value of < .05 as statistically significant. This study was approved by the VA Ann Arbor Health Care System Institutional Review Board.

Results

We evaluated annual PSA surveillance for 15,538 men treated with RT with or without concurrent ADT (Table 1). 

Most men were white (70%), with 29% black and 3% Hispanic. Half (51%) of the men were married, and a minority (16%) lived in rural areas. Most men had screen-detected prostate cancer with a Gleason score ≥ 7 and a PSA level ≤ 10 ng/mL. Most men were treated without concurrent ADT (60%), while those with concurrent ADT tended to have more aggressive disease features (ie, higher PSA and Gleason score). Approximately half (52%) of veterans with prostate cancer received RT away from their diagnosing facility.

On unadjusted analysis, annual guideline concordance was less common among patients who were at the extremes of age, were white, had Gleason 6 disease, had a PSA ≤ 10 ng/mL, did not receive concurrent ADT, or were treated away from their diagnosing facility (P < .05; data not shown). We did find slight differences in patient characteristics based on whether patients were treated at their diagnosing facility (Table 2).

Patients treated at facilities other than where they were diagnosed were more likely to be rural residents, white, and married, with slight differences in baseline PSA and Gleason scores but similar use of radiation monotherapy and concurrent ADT.

Overall, we found that annual guideline concordance was initially very high, though it declined slightly over the study period. For example, guideline concordance dropped from 96% in year 1 to 85% in year 5, with an average patient-level guideline concordance of 91% during the study period. We found minimal differences in annual surveillance after RT by race/ethnicity (Figure 1).

On multilevel multivariable analysis to adjust for clustering at the patient level, we found that race and PSA level were no longer significant predictors of annual surveillance (Table 3). However, the following factors remained significant determinants of lower guideline concordance: extremes of age, Gleason 6 disease, RT without concurrent ADT (adjusted odds ratio [aOR] 1.00 for radiation therapy alone vs 1.84 for radiation therapy with ADT; 95% CI, 1.62-2.09; P < .01), and treatment at a facility different from the diagnosing facility (aOR 1.00 for a different facility vs 1.70 for the same facility; 95% CI, 1.53-1.90; P < .01). Nonmarried status (aOR 1.00 nonmarried vs 1.12 married; 95% CI, 1.01-1.25; P = .03) and urban residence (aOR 1.00 urban vs 1.20 rural; 95% CI, 1.03-1.39; P = .02) became significant on multivariable analysis. Men treated with RT with concurrent ADT were more likely to have greater annual surveillance whether they were treated within or outside of their diagnosing facility (Figure 2).

Discussion

We investigated adherence to guideline-recommended annual surveillance PSA testing in a national cohort of veterans treated with definitive RT for prostate cancer. We found guideline concordance was initially high and decreased slightly over time. We also found that guideline concordance with PSA surveillance varied based on a number of clinical and delivery system factors, including marital status, rurality, receipt of concurrent ADT, and whether the veteran was treated at his diagnosing facility. Taken together, these results are promising; however, they also point to unique considerations for some patient groups and potentially for those treated in the community.

Our finding of lower guideline concordance among nonmarried patients is consistent with prior research, including our study of patients undergoing surgery for prostate cancer.4 Addressing surveillance in this population is important, as they may have less social support than do their married counterparts. We also found surveillance was lower at the extremes of age, which may be appropriate in elderly patients with limited life expectancy but is concerning for younger men with low competing mortality risks.7 Future work should explore whether younger patients experience barriers to care, including employment challenges, as these men are at greatest risk of cancer progression if recurrence goes undetected.

Although rural patients are less likely to undergo definitive prostate cancer treatment, possibly reflecting barriers to care, surveillance in our study was actually higher among rural patients than among urban patients.9 This could reflect the VA's success in connecting rural patients to appropriate services, despite travel distances, to maintain quality of cancer care.10 Because annual PSA surveillance is relatively infrequent and not particularly resource intensive, these high surveillance rates might not extend to patients with cancers that require more frequent survivorship care, such as head and neck cancer. Future work should examine why surveillance rates among urban patients might be slightly lower, as living in a metropolitan area does not equate to the absence of barriers to survivorship care, especially for veterans who cannot take time off from work or who face transportation barriers.

We found guideline concordance was higher among patients with higher Gleason scores, which is important given their higher likelihood of treatment failure. However, low- and intermediate-risk patients also are at risk for treatment failure, so annual PSA surveillance should be optimized in this population unless future studies support the safety and feasibility of less frequent surveillance.10-13 Our finding of increased surveillance among patients who received concurrent ADT may relate to the increased frequency of survivorship care given the need for injections, often every 3 to 6 months. Future studies might examine whether surveillance decreases in this population once patients complete short- or long-term ADT, which is typically given for a maximum of 3 years.

A particularly relevant finding, given recent VA policy changes, is the lower guideline concordance among patients who received RT at a facility different from where they were diagnosed. One possible explanation is that a proportion of patients treated outside their home facilities use Medicare or private insurance and may have surveillance performed outside the VA, which would not have been captured in our study.14 However, it remains plausible that coordination and fragmentation of survivorship care pose challenges for veterans who receive care at separate VA facilities or receive their initial treatment in the community.15 Future studies can help quantify how much of this difference is driven by diagnosis and treatment at separate VA sites vs treatment outside the VA, as different strategies might be necessary to improve surveillance in these 2 populations. Moreover, electronic health record-based tracking has been proposed as a strategy to identify patients who have not received guideline concordant PSA surveillance.14 This strategy may help increase guideline concordance regardless of initial treatment location, provided VA survivorship care is intended.

Although our study examined receipt of PSA testing, it did not examine whether patients were physically seen back in radiation oncology clinics or whether their PSA results were reviewed by radiation oncology providers. Although many surgical patients return to primary care providers for PSA surveillance, surveillance after RT is more complex and likely best managed in the initial years by radiation oncologists. Unlike the postoperative setting, in which the definition of PSA failure is straightforward at > 0.2 ng/mL, the definition of treatment failure after RT is more complicated, as described below.

For patients who did not receive concurrent ADT, failure is defined as a rise of 2 ng/mL above the PSA nadir, which first requires establishing the nadir using the first few postradiation PSA values.15 The definition becomes even more complex in the setting of ADT because ADT itself suppresses PSA, even in the absence of RT, through testosterone suppression.2 At the conclusion of ADT (short-term, 4-6 months; long-term, 18-36 months), the PSA may rise as testosterone recovers.15,16 This is not necessarily indicative of treatment failure, as some normal PSA-producing prostatic tissue may remain after treatment. Given these complexities, ongoing survivorship care with radiation oncology is recommended, at least in the short term.
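
For illustration only, a minimal sketch of the nadir-plus-2 ng/mL (Phoenix) rule for patients treated without ADT is shown below. It uses a running nadir, ignores PSA bounce and confirmatory repeat testing, and is not a substitute for the full RTOG-ASTRO definition or clinical judgment; all names are ours.

```python
from typing import Optional

def phoenix_failure_index(psa_values: list[float]) -> Optional[int]:
    """Return the index of the first PSA meeting nadir + 2 ng/mL, else None.

    Simplified: uses the running nadir of values observed so far and ignores
    ADT effects, benign PSA bounce, and confirmatory repeat testing.
    """
    nadir = float("inf")
    for i, psa in enumerate(psa_values):
        if psa >= nadir + 2.0:
            return i
        nadir = min(nadir, psa)
    return None

# Post-RT PSA values (ng/mL): nadir of 0.5, criterion first met at 2.7
print(phoenix_failure_index([1.8, 0.9, 0.5, 0.8, 1.6, 2.7]))  # 5
```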

Physical visits are a challenge for some patients undergoing prostate cancer surveillance after treatment. Therefore, exploring the safety and feasibility of automated PSA tracking15 and strategies for increasing the use of telemedicine, including the clinical video telehealth appointments already used for survivorship and other urologic care in a number of VA clinics, represents an opportunity to systematically provide the highest quality survivorship care in the VA.17,18

Conclusion

Most veterans receive guideline concordant PSA surveillance after RT for prostate cancer. Nonetheless, at the beginning of treatment, providers should screen veterans for risk factors for loss to follow-up (eg, care at a different or non-VA facility), discuss geographic, financial, and other barriers, and plan to leverage existing VA resources (eg, travel support) to continue to achieve high-quality PSA surveillance and survivorship care. Future research should investigate ways to take advantage of the VA's robust electronic health record system and telemedicine infrastructure to further optimize prostate cancer survivorship care and PSA surveillance, particularly among vulnerable patient groups and those treated outside of their diagnosing facility.

Acknowledgments
Funding Sources: VA HSR&D Career Development Award: 2 (CDA 12−171) and NCI R37 R37CA222885 (TAS).

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

References

1. National Comprehensive Cancer Network. NCCN clinical practice guidelines in oncology: prostate cancer v4.2018. https://www.nccn.org/professionals/physician_gls/pdf/prostate.pdf. Updated August 15, 2018. Accessed January 23, 2019.

2. Sanda MG, Chen RC, Crispino T, et al. Clinically localized prostate cancer: AUA/ASTRO/SUO guideline. https://www.auanet.org/guidelines/prostate-cancer-clinically-localized-(2017). Published 2017. Accessed January 22, 2019.

3. Zeliadt SB, Penson DF, Albertsen PC, Concato J, Etzioni RD. Race independently predicts prostate specific antigen testing frequency following a prostate carcinoma diagnosis. Cancer. 2003;98(3):496-503.

4. Trantham LC, Nielsen ME, Mobley LR, Wheeler SB, Carpenter WR, Biddle AK. Use of prostate-specific antigen testing as a disease surveillance tool following radical prostatectomy. Cancer. 2013;119(19):3523-3530.

5. Shi Y, Fung KZ, John Boscardin W, et al. Individualizing PSA monitoring among older prostate cancer survivors. J Gen Intern Med. 2018;33(5):602-604.

6. Chapman C, Burns J, Caram M, Zaslavsky A, Tsodikov A, Skolarus TA. Multilevel predictors of surveillance PSA guideline concordance after radical prostatectomy: a national Veterans Affairs study. Paper presented at: Association of VA Hematology/Oncology Annual Meeting; September 28-30, 2018; Chicago, IL. Abstract 34. https://www.mdedge.com/fedprac/avaho/article/175094/prostate-cancer/multilevel-predictors-surveillance-psa-guideline. Accessed January 22, 2019.

7. Kirk PS, Borza T, Caram MEV, et al. Characterising potential bone scan overuse amongst men treated with radical prostatectomy. BJU Int. 2018. [Epub ahead of print.]

8. Kirk PS, Borza T, Shahinian VB, et al. The implications of baseline bone-health assessment at initiation of androgen-deprivation therapy for prostate cancer. BJU Int. 2018;121(4):558-564.

9. Baldwin LM, Andrilla CH, Porter MP, Rosenblatt RA, Patel S, Doescher MP. Treatment of early-stage prostate cancer among rural and urban patients. Cancer. 2013;119(16):3067-3075.

10. Skolarus TA, Chan S, Shelton JB, et al. Quality of prostate cancer care among rural men in the Veterans Health Administration. Cancer. 2013;119(20):3629-3635.

11. Hamdy FC, Donovan JL, Lane JA, et al; ProtecT Study Group. 10-year outcomes after monitoring, surgery, or radiotherapy for localized prostate cancer. N Engl J Med. 2016;375(15):1415-1424.

12. Michalski JM, Moughan J, Purdy J, et al. Effect of standard vs dose-escalated radiation therapy for patients with intermediate-risk prostate cancer: the NRG Oncology RTOG 0126 randomized clinical trial. JAMA Oncol. 2018;4(6):e180039.

13. Chang MG, DeSotto K, Taibi P, Troeschel S. Development of a PSA tracking system for patients with prostate cancer following definitive radiotherapy to enhance rural health. J Clin Oncol. 2016;34(suppl 2):39-39.

14. Skolarus TA, Zhang Y, Hollenbeck BK. Understanding fragmentation of prostate cancer survivorship care: implications for cost and quality. Cancer. 2012;118(11):2837-2845.

15. Roach M 3rd, Hanks G, Thames H Jr, et al. Defining biochemical failure following radiotherapy with or without hormonal therapy in men with clinically localized prostate cancer: recommendations of the RTOG-ASTRO Phoenix Consensus Conference. Int J Radiat Oncol Biol Phys. 2006;65(4):965-974.

16. Buyyounouski MK, Hanlon AL, Horwitz EM, Uzzo RG, Pollack A. Biochemical failure and the temporal kinetics of prostate-specific antigen after radiation therapy with androgen deprivation. Int J Radiat Oncol Biol Phys. 2005;61(5):1291-1298.

17. Chu S, Boxer R, Madison P, et al. Veterans Affairs telemedicine: bringing urologic care to remote clinics. Urology. 2015;86(2):255-260.

18. Safir IJ, Gabale S, David SA, et al. Implementation of a tele-urology program for outpatient hematuria referrals: initial results and patient satisfaction. Urology. 2016;97:33-39.

Author and Disclosure Information

Christina Chapman and Ted Skolarus are Investigators, and Jennifer Burns is a Data Analyst; all at the Center for Clinical Management Research, Veterans Affairs Ann Arbor Health Care System in Michigan. Christina Chapman is an Assistant Professor, Radiation Oncology, and Ted Skolarus is an Associate Professor, Dow Division of Urology Health Services Research, Division of Oncology, Department of Urology, both at the University of Michigan. Correspondence: Ted Skolarus (tskolar@med.umich.edu)






Sociodemographic disadvantage confers poorer survival in young adults with CRC

Article Type
Changed

Young adults with colorectal cancer who live in neighborhoods with higher levels of disadvantage differ on health measures, present with more advanced disease, and have poorer survival. These were among the key findings of a retrospective cohort study reported at the 2020 GI Cancers Symposium.


The incidence of colorectal cancer has risen sharply – by 51% – since 1994 among individuals younger than 50 years, with the greatest uptick seen among those aged 20-29 years (J Natl Cancer Inst. 2017;109[8]. doi: 10.1093/jnci/djw322).

“Sociodemographic disparities have been linked to inferior survival. However, their impact and association with outcome in young adults is not well described,” said lead investigator Ashley Matusz-Fisher, MD, of the Levine Cancer Institute in Charlotte, N.C.

The investigators analyzed data from the National Cancer Database for the years 2004-2016, identifying 26,768 patients who received a colorectal cancer diagnosis when aged 18-40 years.

Results showed that those living in areas with low income (less than $38,000 annually) and low educational attainment (high school graduation rate less than 79%), and those living in urban or rural areas (versus metropolitan areas) had 24% and 10% higher risks of death, respectively.

Patients in the low-income, low-education group were more than six times as likely to be black and to lack private health insurance, had greater comorbidity, had larger tumors and more nodal involvement at diagnosis, and were less likely to undergo surgery.

Several factors may be at play for the low-income, low-education group, Dr. Matusz-Fisher speculated: limited access to care, lack of awareness of important symptoms, and inability to afford treatment when it is needed. “That could very well be contributing to them presenting at later stages and then maybe not getting the treatment that other people who have insurance would be getting.

“To try to eliminate these disparities, the first step is recognition, which is what we are doing – recognizing there are disparities – and then making people aware of these disparities,” she commented. “More efforts are needed to increase access and remove barriers to care, with the hope of eliminating disparities and achieving health equity.”

Mitigating disparities

Several studies have looked at mitigating sociodemographic-related disparities in colorectal cancer outcomes, according to session cochair John M. Carethers, MD, AGAF, professor and chair of the department of internal medicine at the University of Michigan, Ann Arbor.


A large Delaware initiative tackled the problem via screening (J Clin Oncol. 2013;31:1928-30). “Now this was over 50 – we don’t typically screen under 50 – but over 50, you can essentially eliminate this disparity with navigation services and screening. How do you do that under 50? I’m not quite sure,” he said in an interview, adding that some organizations are recommending lowering the screening age to 45 or even 40 years in light of rising incidence among young adults.

However, accumulating evidence suggests that there may be inherent biological differences that are harder to overcome. “There is a lot of data … showing that polyps happen earlier and they are bigger in certain racial groups, particularly African Americans and American Indians,” Dr. Carethers elaborated. What is driving the biology is unknown, but the microbiome has come under scrutiny.

“So you are a victim of your circumstances,” he summarized. “You are living in a low-income area, you are eating more proinflammatory-type foods, you are getting your polyps earlier, and then you are getting your cancers earlier.”

Study details

Rural, urban, or metropolitan status was ascertained for 25,861 patients in the study, and area income and education were ascertained for 7,743 patients, according to data reported at the symposium, sponsored by the American Gastroenterological Association, the American Society of Clinical Oncology, the American Society for Radiation Oncology, and the Society of Surgical Oncology.

Compared with counterparts living in areas with both high annual income (greater than $68,000) and high education (greater than 93% high school graduation rate), patients living in areas with both low annual income (less than $38,000) and low education (less than 79% high school graduation rate) were significantly more likely to be black (odds ratio [OR], 6.4), to not have private insurance (OR, 6.3), to have pathologic T3/T4 stage (OR, 1.4), to have positive nodes (OR, 1.2), and to have a Charlson-Deyo comorbidity score of 1 or greater (OR, 1.6). They also were less likely to undergo surgery (OR, 0.63) and more likely to be rehospitalized within 30 days (OR, 1.3).

After adjusting for race, insurance status, T/N stage, and comorbidity score, patients in the low-income, low-education group had an increased risk of death relative to counterparts in the high-income, high-education group (hazard ratio [HR], 1.24; P = .004). Similarly, patients living in urban or rural areas had an increased risk of death relative to those living in metropolitan areas (HR, 1.10; P = .02).
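
For readers who want to see the general shape of such an adjusted survival analysis, the following is a minimal, hypothetical sketch using the lifelines library; the file and column names are invented for illustration and do not reflect the investigators' actual National Cancer Database variables or code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient; file and column names are invented for illustration.
df = pd.read_csv("crc_young_adults.csv")

columns = [
    "survival_months", "death",        # follow-up time and event indicator
    "low_income_low_education",        # exposure of interest (1 = yes)
    "black", "no_private_insurance",   # adjustment covariates
    "t3_t4_stage", "node_positive", "comorbidity_ge1",
]

cph = CoxPHFitter()
cph.fit(df[columns], duration_col="survival_months", event_col="death")
cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios
```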

Among patients with stage IV disease, median overall survival was 26.1 months for those from high-income, high-education areas, but 20.7 months for those from low-income, low-education areas (P less than .001).

Dr. Matusz-Fisher did not report any conflicts of interest. The study did not receive any funding.

SOURCE: Matusz-Fisher A et al. 2020 GI Cancers Symposium, Abstract 13.

Opioid prescribing patterns mostly ‘unchanged’ with laparoscopy

– Opioid prescribing is surprisingly high after laparoscopic colorectal surgery and is higher at larger hospitals and in some regions of the United States, according to a new study.

“In theory, for management of pain, any opioids required should be lower after laparoscopic surgery, but opioids are still ubiquitous and prescribing patterns have largely been unchanged with laparoscopy,” Deborah S. Keller, MD, assistant professor of surgery at Columbia University, New York, said at the annual clinical congress of the American College of Surgeons.

Several studies show wide variation in opioid use and prescribing after laparoscopic colorectal surgery, Dr. Keller said. She also pointed out that available data are limited on inpatient opioid use and the causes of high opioid use.

The team analyzed records of 18,395 patients in the Premier Inpatient Database treated between Jan. 1, 2014, and Sept. 30, 2015. The mean age was 61 years, and 54% were female. The distribution of hospital-stay milligram morphine equivalents (MME) was 48 at the 25th percentile, 108 at the 50th, and 246 at the 75th. Overall, 18% of patients were in the high-use category.
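
For readers curious how a high-use flag like this is constructed, here is a minimal Python sketch. It assumes, consistent with the later discussion of the 75th percentile cutoff, that high use means a total hospital-stay MME above the cohort’s 75th percentile; the data and variable names are hypothetical, not drawn from the Premier database.

```python
import numpy as np

def classify_opioid_use(mme_totals, cutoff_percentile=75):
    """Summarize hospital-stay MME and flag patients above a percentile cutoff.

    mme_totals: per-patient total milligram morphine equivalents for the stay.
    Returns the 25th/50th/75th percentiles and a boolean 'high use' array.
    """
    mme_totals = np.asarray(mme_totals, dtype=float)
    p25, p50, p75 = np.percentile(mme_totals, [25, 50, 75])
    threshold = np.percentile(mme_totals, cutoff_percentile)
    high_use = mme_totals > threshold
    return {"p25": p25, "p50": p50, "p75": p75}, high_use

# Hypothetical, right-skewed MME totals standing in for real inpatient data
rng = np.random.default_rng(0)
example_mme = rng.lognormal(mean=4.5, sigma=1.0, size=1_000)
summary, high_use = classify_opioid_use(example_mme)
print(summary, f"high use: {high_use.mean():.0%} of patients")
```

Note that applying a 75th percentile cutoff to the same cohort it was derived from would flag roughly 25% of patients; the study’s 18% figure suggests the threshold was defined somewhat differently, and the report does not specify how.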

Some factors were associated with high opioid use, including emergency surgery (odds ratio, 1.28; P = .0002), being aged 18-34 years (OR, 5.8; P less than .0001), major severity of illness (OR, 4.2; P less than .0001), chronic obstructive pulmonary disease comorbidity (OR, 1.13; P = .0350), having Medicaid insurance (OR, 1.35; P less than .0001), and being treated in a rural hospital (OR, 1.44; P less than .0001).

Factors associated with lower opioid use included female sex (OR, 0.90; P = .0064), being treated in a facility with fewer than 500 beds (OR, 0.706-0.822, all statistically significant), being treated in the Midwest (OR, 0.62; P less than .0001) or the South (OR, 0.66; P less than .0001), and treatment by a surgeon with a lower surgical volume (fewer than 65 cases vs. 300; OR, 0.58; P = .0286).

The study was limited by its reliance on administrative data, and one questioner at the session asked about the validity of the 75th percentile cutoff for high use, suggesting that it would be better to pick a value associated with a known increased risk of opioid dependence.

The findings could help inform future guidelines, Dr. Keller said. “On a local level, it can help optimize enhanced recovery protocols, (assist) providers to proactively recognize patients and scenarios at risk for high use, and create targeted education for younger patients, hospitals in specific geographic regions, and larger bedside hospitals so that they can follow best practices,” she added.

The finding that institutions with fewer beds were associated with lower chances of high opioid use was a surprise. “We’re looking into that,” she said.

Dr. Keller had no relevant financial disclosures.

SOURCE: Keller DS et al. Clinical Congress 2019, Abstract.

Prescribing guide recommends fewer opioids after colorectal surgery

– Opioids may not always be necessary following elective colorectal surgery. That’s the message from a retrospective study of medical records at the University of Massachusetts Medical Center, Worcester, which found that over half of patients never even filled their prescription after colorectal surgery.

“We found that over half of the patients took no opioid pills after discharge, and 60% of the prescribed pills were left over,” said David Meyer, MD, during a presentation of the study at the annual clinical congress of the American College of Surgeons. Dr. Meyer is a surgical resident at the University of Massachusetts.

The team also used the results of their analysis to develop a guideline for the amount of opioid to prescribe following major colorectal surgery, with specific amounts of pills recommended based on the amount of opioid use during the last 24 hours of hospitalization.

“It shows a real interest in tailoring our postoperative care in pain management. They’re trying to find a way to hit a sweet spot to get patients the right amount of pain control,” said Jonathan Mitchem, MD, in an interview. Dr. Mitchem is an assistant professor at University of Missouri–Columbia, and comoderated the session where the research was presented.

The researchers performed a retrospective analysis of major elective colorectal procedures at their institution, including colectomy, rectal resection, and ostomy reversal. The analysis included 100 patients (55 female), with a mean age of 59 years. A total of 71% were opioid naive, meaning there was no evidence of an opioid prescription in the year prior to surgery. A total of 74% underwent a laparoscopic procedure, and 75% had a partial colectomy. The postoperative stay averaged 4.5 days.

The researchers converted in-hospital opioid use categories (IOUC) to equianalgesic 5-mg oxycodone pills (EOPs). In the last 24 hours before discharge, 53% of patients received no opioids at all (no IOUC, 0 EOPs), 25% received low amounts (low IOUC, 0.1-3.0 EOPs), and 22% received high amounts (high IOUC, more than 3.1 EOPs). Overall, the prescribed amount was 17.5 EOPs, of which just 38% were consumed. These figures were lowest in the no-IOUC group (15.7 EOPs, 16% consumed), followed by the low-IOUC group (16.0, 32%) and the high-IOUC group (23.7, 79%; P less than .01).

The researchers then looked at the 85th percentile of EOPs for each group and used that to develop a guideline for opioid prescribing: 3 EOPs for the no-IOUC group, 12 EOPs for the low-IOUC group, and 30 EOPs for the high-IOUC group.
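
The Python sketch below shows how a discharge-prescribing rule of this shape could be applied. The category cutoffs and recommended pill counts are those reported by the study; the MME-to-EOP conversion (one 5-mg oxycodone pill taken as 7.5 MME, using the usual oxycodone factor of 1.5) and the function names are illustrative assumptions, and the small gap between the reported "0.1-3.0" and "more than 3.1" cutoffs is closed at 3.0 here.

```python
def mme_to_eop(mme_last_24h, mme_per_pill=7.5):
    """Convert last-24-hour inpatient MME to equianalgesic 5-mg oxycodone pills.

    Assumes the common oxycodone conversion factor of 1.5, so one 5-mg pill
    is treated as 7.5 MME; the study's exact conversion is not reported here.
    """
    return mme_last_24h / mme_per_pill

def recommended_discharge_pills(eop_last_24h):
    """Map last-24-hour EOP use to the study's recommended discharge quantity."""
    if eop_last_24h == 0:        # no in-hospital opioid use in the last 24 hours
        return 3
    elif eop_last_24h <= 3.0:    # low use (reported as 0.1-3.0 EOPs)
        return 12
    else:                        # high use (reported as more than 3.1 EOPs)
        return 30

# Hypothetical patient who used 15 MME in the last 24 hours of the stay
eops = mme_to_eop(15)                           # 2.0 EOPs -> low-use group
print(eops, recommended_discharge_pills(eops))  # 2.0 12
```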

The researchers examined various factors that might have influenced opioid use, correcting for whether the patient was opioid naive, case type, postoperative length of stay, and new ostomy creation. The only factor significantly associated with excess opioid use was inflammatory bowel disease, which was linked to more than an eightfold increase in the odds of using more than the guideline amounts (adjusted odds ratio, 8.3; P less than .01; area under the curve, 0.85).

The study is limited by its single-center design and by its reliance on self-reported opioid use. The guidelines need to be validated prospectively.

No funding information was disclosed. Dr. Meyer and Dr. Mitchem had no relevant financial disclosures.

SOURCE: Meyer D et al. Clinical Congress 2019, Abstract.

KRAS-mutation colon, rectal cancers have distinct survival profiles

– When it comes to KRAS mutational status, liver metastases originating from left-sided colon tumors have different clinical and survival characteristics from those originating from primary rectal tumors. There was no significant difference in survival between patients with mutated versus wild-type (WT) KRAS in rectal tumor cases, whereas there was a significant survival difference in left-sided colon cases, according to a first analysis of the effect of KRAS status in this specific population.

The work was presented at the annual clinical congress of the American College of Surgeons by Neda Amini, MD. “The liver metastasis originating from a rectal tumor might have a different biology than from a primary colon tumor, and we should have stratification according to the primary tumor location in clinical trials testing chemotherapy and targeted agents,” said Dr. Amini, who is a surgical resident at Sinai Hospital of Baltimore, during her presentation of the research.

“I thought that was interesting because most of the studies that have been done look at KRAS mutations in colorectal cancers, and [colon and rectal cancers] are two completely different entities,” said session comoderator Valentine Nfonsam, MD, associate professor of surgery at the University of Arizona, Tucson, in an interview. The findings could also impact clinical practice. “If a patient has rectal cancer, if they have a KRAS mutation, whether you treat them with cetuximab or not, the overall survival doesn’t really change. Whereas for colon cancer patients you really want to make that distinction. You want to truly personalize their therapy, because of the difference in survival in a patient with the KRAS mutation in colon cancer,” Dr. Nfonsam said.

“It gets to the heart that there might be different biology between colon cancers and rectal cancers. It’s important to understand the differences in the basic biology, which affects the treatment and the surgery,” said the other comoderator, Jonathan Mitchem, MD, in an interview. Dr. Mitchem is an assistant professor at the University of Missouri–Columbia.

KRAS mutations are common in colorectal cancer, occurring in 30% of cases, and multiple trials have shown they are associated with nonresponse to the epidermal growth factor receptor inhibitors cetuximab and panitumumab. All colorectal cancer patients with liver metastases should be screened for KRAS mutations, according to National Comprehensive Cancer Network guidelines.

The researchers conducted a retrospective analysis of 1,304 patients who underwent curative-intent surgery for colorectal liver metastases at nine institutions between 2000 and 2016. The KRAS mutation rate was similar in primary colon and rectal tumors (34.2% vs. 30.9%; P = .24). The frequency was highest in right-sided colon tumors (39.4%). There was a statistically significant difference in the frequency of KRAS mutation between primary rectal tumors (30.9%) and left-sided colon tumors (21.1%; P = .001).

There were several differences in clinical characteristics between left-sided colon cancers and rectal cancers. Rectal cancer patients were more likely to be male (73.4% vs. 62.4%; P = .001); more likely to be stage T1-T2 (16.6% vs. 10.6%; P = .012); less likely to have serum carcinoembryonic antigen greater than 100 ng/mL (8.4% vs. 14.1%; P = .018); and less likely to have a liver metastasis under 3 cm (36.1% vs. 49.3%).

There were several significant differences between KRAS-mutant and KRAS wild-type patients with a colon primary tumor. Wild-type patients were more likely to have lymph node metastasis (65.2% vs. 55.37%; P = .004), liver metastasis greater than 3 cm (48.8% vs. 39.3%; P = .01), chemotherapy before hepatic resection (65.5% vs. 56.0%; P = .005), and anti–epidermal growth factor receptor therapy (5.7% vs. 0.3%; P less than .001), whereas patients with mutant KRAS were more likely to have extrahepatic disease (16.3% vs. 10.4%; P = .01) and chemotherapy after hepatic resection (64.5% vs. 55.5%; P = .01). The only difference seen in patients with rectal primary tumors was the likelihood of receiving chemotherapy after hepatic surgery, which was higher among patients with mutated KRAS (70.8% vs. 59.0%; P = .03).

After a median follow-up of 26.4 months, the 1-, 3-, and 5-year overall survival rates were 88.9%, 62.5%, and 44.5%, respectively. Among patients with primary colon cancer, survival was significantly worse in those with a KRAS mutation, both overall and in the subset with left-sided colon tumors (log-rank P less than .001 for both), but there was no significant survival difference between mutation carriers and wild-type patients with a primary rectal tumor (log-rank P = .53). A multivariate analysis showed an 82% increase in the risk of death associated with KRAS mutation in primary colon cancer (hazard ratio, 1.82; P less than .001), whereas a univariate analysis showed no significant mortality association in rectal primary tumors (hazard ratio, 1.13; P = .46).
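
To illustrate the type of analysis described here – comparing survival curves by KRAS status with a log-rank test and estimating a hazard ratio with a Cox model – the sketch below uses the lifelines library on simulated data. The cohort size, effect size, and column names are invented for illustration; this is not the study’s dataset or code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Simulate a colon-primary cohort in which KRAS-mutant patients have a higher
# hazard of death, mimicking the direction of the reported finding.
rng = np.random.default_rng(42)
n = 400
kras_mutant = rng.integers(0, 2, size=n)          # 1 = mutant, 0 = wild type
baseline = rng.exponential(scale=60, size=n)      # survival time in months
time_to_death = np.where(kras_mutant == 1, baseline / 1.8, baseline)
censor_time = rng.exponential(scale=48, size=n)   # administrative censoring
event = (time_to_death <= censor_time).astype(int)
duration = np.minimum(time_to_death, censor_time)

df = pd.DataFrame({"duration": duration, "event": event, "kras_mutant": kras_mutant})

# Log-rank test comparing mutant vs. wild-type survival curves
mut, wt = df[df.kras_mutant == 1], df[df.kras_mutant == 0]
lr = logrank_test(mut.duration, wt.duration,
                  event_observed_A=mut.event, event_observed_B=wt.event)
print(f"log-rank P = {lr.p_value:.4f}")

# Cox proportional hazards model; exp(coef) for kras_mutant is the hazard ratio
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["exp(coef)", "p"]])
```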

The funding source was not disclosed. The authors had no relevant financial disclosures.

SOURCE: Amini N et al. J Am Coll Surg. 2019 Oct;229(4):Suppl 1, S69-70.

New practice guideline: CRC screening isn’t necessary for low-risk patients aged 50-79 years

Current models that predict risk lack precision

Patients aged 50-79 years with a demonstrably low risk of developing colorectal cancer within 15 years probably don’t need to be screened. But if their risk of disease is at least 3% over 15 years, screening is suggested, Lise M. Helsingen, MD, and colleagues wrote in BMJ (2019;367:l5515. doi: 10.1136/bmj.l5515).

For these patients, “We suggest screening with one of the four screening options: fecal immunochemical test (FIT) every year, FIT every 2 years, a single sigmoidoscopy, or a single colonoscopy,” wrote Dr. Helsingen of the University of Oslo, and her team.

She chaired a 22-member international panel convened through the MAGIC research and innovation program as part of the BMJ Rapid Recommendations project. The team reviewed 12 research papers comprising almost 1.4 million patients from Denmark, Italy, the Netherlands, Norway, Poland, Spain, Sweden, the United Kingdom, and the United States. Follow-up ranged from 0 to 19.5 years for colorectal cancer incidence and up to 30 years for mortality.

Because of the dearth of relevant data in some studies, the projected outcomes had to be simulated, with benefit and harm calculations based on 100% screening adherence. The team noted, however, that complete adherence is impossible to achieve in practice; most studies of colorectal cancer screening do not exceed 50% adherence.

“All the modeling data are of low certainty. It is a useful indication, but there is a high chance that new evidence will show a smaller or larger benefit, which in turn may alter these recommendations.”

Compared with no screening, all four screening models reduced the risk of colorectal cancer mortality to a similar level.

  • FIT every year, 59%.
  • FIT every 2 years, 50%.
  • Single sigmoidoscopy, 52%.
  • Single colonoscopy, 67%.

Screening had less of an impact on reducing the incidence of colorectal cancer:

  • FIT every 2 years, 0.05%.
  • FIT every year, 0.15%.
  • Single sigmoidoscopy, 27%.
  • Single colonoscopy, 34%.

The panel also assessed potential harms. Among almost 1 million patients, the colonoscopy-related mortality rate was 0.03 per 1,000 procedures. The perforation rate was 0.8 per 1,000 colonoscopies after a positive fecal test, and 1.4 per 1,000 screened with sigmoidoscopy. The bleeding rate was 1.9 per 1,000 colonoscopies performed after a positive fecal test, and 3-4 per 1,000 screened with sigmoidoscopy.

Successful implementation of these recommendations hinges on accurate risk assessment, however. The team recommended the QCancer platform as “one of the best performing models for both men and women.”

The calculator includes age, sex, ethnicity, smoking status, alcohol use, family history of gastrointestinal cancer, personal history of other cancers, diabetes, ulcerative colitis, colonic polyps, and body mass index.

“We suggest this model because it is available as an online calculator; includes only risk factors available in routine health care; has been validated in a population separate from the derivation population; has reasonable discriminatory ability; and has a good fit between predicted and observed outcomes. In addition, it is the only online risk calculator we know of that predicts risk over a 15-year time horizon.”
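
As a concrete illustration of how the panel’s threshold would be applied, the sketch below encodes the decision rule in Python. The 3% cutoff over 15 years comes from the guideline; the risk estimate itself is assumed to come from a QCancer-style calculator, which is not reproduced here, and the example values are hypothetical.

```python
def screening_suggested(risk_15yr, threshold=0.03):
    """Return True when the estimated 15-year colorectal cancer risk meets the
    guideline's 3% threshold for suggesting screening."""
    return risk_15yr >= threshold

# Hypothetical risk estimates, as a QCancer-style model might produce
for risk in (0.012, 0.031):
    if screening_suggested(risk):
        print(f"15-year risk {risk:.1%}: suggest one of the four screening options")
    else:
        print(f"15-year risk {risk:.1%}: screening not suggested")
```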

The team stressed that their recommendations can’t be applied to all patients. Because evidence for both screening recommendations was weak – largely because of the dearth of supporting data – patients and physicians should cocreate a personalized screening plan.

“Several factors influence individuals’ decisions whether to be screened, even when they are presented with the same information,” the authors said. These include variation in an individual’s values and preferences, a close balance of benefits versus harms and burdens, and personal preference.

“Some individuals may value a minimally invasive test such as FIT, and the possibility of invasive screening with colonoscopy might put them off screening altogether. Those who most value preventing colorectal cancer or avoiding repeated testing are likely to choose sigmoidoscopy or colonoscopy.”

The authors had no financial conflicts of interest.

SOURCE: BMJ 2019;367:l5515. doi: 10.1136/bmj.l5515.

Current models that predict risk lack precision

There is compelling evidence that CRC screening of average-risk individuals is effective – screening with one of several modalities can reduce CRC incidence and mortality in average-risk individuals. Various guidelines throughout the world have recommended screening, usually beginning at age 50 years, in a one-size-fits-all manner. Despite our knowledge that different people have a different lifetime risk of CRC, no prior guidelines have suggested that risk stratification be built into the decision making.

A new clinical practice guideline from an international panel applies principles of precision medicine to CRC screening and proposes a paradigm shift by recommending screening to higher-risk individuals, and not recommending screening if the risk of CRC is low. Intuitively, this makes sense and conserves resources – if we can accurately determine risk of CRC. This guideline uses a calculator (QCancer) derived from United Kingdom data to estimate 15-year risk of CRC. The panel suggests that for screening to be initiated there should be a certain level of benefit: a CRC mortality or incidence reduction of 5 per 1,000 screenees for a noninvasive test like fecal immunochemical test (FIT) and a reduction of 10 per 1,000 screenees for invasive tests like sigmoidoscopy and colonoscopy. When these estimates of benefit are placed into a microsimulation model, the cutoff for recommending screening is a 3% risk of CRC over the next 15 years. This approach would largely eliminate any screening before age 60 years, based on the calculator rating, unless there is a family history of GI cancer.

All of the recommendations in this practice guideline are weak because they are derived from models that lack adequate precision. Nevertheless, the authors have proposed a new approach to CRC screening, similar to management plans for patients with cardiovascular disease. Before adopting such an approach, we need to be more comfortable with the precision of the risk estimates. These estimates, derived entirely from demographic and clinical information, may be enhanced by genomic data to achieve more precision. Further data on the willingness of the public to accept no screening if their risk is below a certain threshold needs to be evaluated. Despite these issues, the guideline presents a provocative approach which demands our attention.

David Lieberman, MD, AGAF, is professor of medicine and chief of the division of gastroenterology and hepatology, Oregon Health & Science University, Portland. He is Past President of the AGA Institute. He has no conflicts of interest.

Patients aged 50-79 years with a demonstrably low risk of developing colorectal cancer within the next 15 years probably do not need to be screened. But if their risk of disease is at least 3% over 15 years, patients should be screened, Lise M. Helsingen, MD, and colleagues wrote in BMJ (2019;367:l5515. doi: 10.1136/bmj.l5515).

For these patients, “We suggest screening with one of the four screening options: fecal immunochemical test (FIT) every year, FIT every 2 years, a single sigmoidoscopy, or a single colonoscopy,” wrote Dr. Helsingen of the University of Oslo, and her team.
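
For illustration only, the decision rule described above can be expressed as a short program. The sketch below (in Python) encodes the panel’s 3% 15-year risk threshold and the four suggested test options; the function name and the sample risk values are hypothetical, and a real risk estimate would come from a validated calculator such as QCancer, not from this code.

# Minimal sketch of the decision rule described in the BMJ Rapid Recommendation.
# The 3% / 15-year threshold and the four suggested options come from the
# article; the function and the sample risk values are illustrative only.

RISK_THRESHOLD = 0.03  # 3% estimated risk of CRC over the next 15 years

SCREENING_OPTIONS = (
    "FIT every year",
    "FIT every 2 years",
    "single sigmoidoscopy",
    "single colonoscopy",
)

def screening_recommendation(risk_15_year: float) -> str:
    """Return the panel's suggested course for a patient aged 50-79."""
    if risk_15_year >= RISK_THRESHOLD:
        return "Screening suggested; discuss one of: " + ", ".join(SCREENING_OPTIONS)
    return "Screening not suggested (estimated 15-year risk below 3%)"

if __name__ == "__main__":
    for risk in (0.01, 0.03, 0.05):  # hypothetical risk estimates
        print(f"{risk:.0%}: {screening_recommendation(risk)}")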

She chaired a 22-member international panel convened through the MAGIC research and innovation program as part of the BMJ Rapid Recommendations project. The team reviewed 12 research papers comprising almost 1.4 million patients from Denmark, Italy, the Netherlands, Norway, Poland, Spain, Sweden, the United Kingdom, and the United States. Follow-up ranged from 0 to 19.5 years for colorectal cancer incidence and up to 30 years for mortality.

Because some studies lacked relevant data, the projected outcomes had to be simulated, with benefit and harm calculations based on 100% screening adherence. Complete adherence, the team noted, is impossible to achieve; most studies of colorectal screening do not exceed a 50% adherence level.

“All the modeling data are of low certainty. It is a useful indication, but there is a high chance that new evidence will show a smaller or larger benefit, which in turn may alter these recommendations.”

Compared with no screening, all four screening strategies reduced colorectal cancer mortality to a broadly similar degree:

  • FIT every year, 59%.
  • FIT every 2 years, 50%.
  • Single sigmoidoscopy, 52%.
  • Single colonoscopy, 67%.

Screening had less of an impact on reducing the incidence of colorectal cancer:

  • FIT every 2 years, 0.05%.
  • FIT every year, 0.15%.
  • Single sigmoidoscopy, 27%.
  • Single colonoscopy, 34%.

The panel also assessed potential harms. Among almost 1 million patients, the colonoscopy-related mortality rate was 0.03 per 1,000 procedures. The perforation rate was 0.8 per 1,000 colonoscopies after a positive fecal test, and 1.4 per 1,000 screened with sigmoidoscopy. The bleeding rate was 1.9 per 1,000 colonoscopies performed after a positive fecal test, and 3-4 per 1,000 screened with sigmoidoscopy.

Successful implementation of these recommendations hinges on accurate risk assessment, however. The team recommended the QCancer platform as “one of the best performing models for both men and women.”

The calculator includes age, sex, ethnicity, smoking status, alcohol use, family history of gastrointestinal cancer, personal history of other cancers, diabetes, ulcerative colitis, colonic polyps, and body mass index.

“We suggest this model because it is available as an online calculator; includes only risk factors available in routine health care; has been validated in a population separate from the derivation population; has reasonable discriminatory ability; and has a good fit between predicted and observed outcomes. In addition, it is the only online risk calculator we know of that predicts risk over a 15-year time horizon.”

The team stressed that their recommendations can’t be applied to all patients. Because evidence for both screening recommendations was weak – largely because of the dearth of supporting data – patients and physicians should cocreate a personalized screening plan.

“Several factors influence individuals’ decisions whether to be screened, even when they are presented with the same information,” the authors said. These include variation in individual values and preferences and the close balance between the benefits and the harms and burdens of screening.

“Some individuals may value a minimally invasive test such as FIT, and the possibility of invasive screening with colonoscopy might put them off screening altogether. Those who most value preventing colorectal cancer or avoiding repeated testing are likely to choose sigmoidoscopy or colonoscopy.”

The authors had no financial conflicts of interest.

SOURCE: BMJ 2019;367:l5515. doi: 10.1136/bmj.l5515.

 


Laparoscopic surgery has short-term advantages for elderly patients with colorectal cancer


 

Elderly patients with colorectal cancer who undergo laparoscopic surgery have better perioperative outcomes than do those who undergo open surgery, although in this sample long-term outcomes were similar, Sicheng Zhou and colleagues wrote in BMC Surgery.


“Laparoscopic surgery showed better results than the open surgery in short-term outcomes,” said Dr. Zhou of the Chinese Academy of Medical Sciences, and coauthors. “[Carcinoembryonic antigen] level, III/IV stage, and perineural invasion were all reliable predictors of overall survival and disease-free survival for the treatment of laparoscopic surgery and open surgery for elderly Chinese patients over 80 years old with colorectal cancer.”

The study comprised 313 patients aged 80 years or older who underwent surgery for colorectal cancer, divided between laparoscopic and open approaches. Patients were matched 1:1 across the two approaches, for a total of 93 pairs included in the analysis. The patients’ mean age was 82 years, and medical comorbidities were present in about 63%. The tumor was most often located in the rectum or the right colon (about 34% each); the next most common disease site was the sigmoid colon (22%).

Most tumors were stage III (58%) or stage II (about 30%). About 70% were moderately differentiated and 20% poorly differentiated. Carcinoembryonic antigen was no greater than 5 ng/mL in about three-fourths of the group and higher in the remainder.

Surgery duration was somewhat shorter in the open group, but not significantly so. However, intraoperative complications were higher in the open group, including transfusion (22.6% vs. 16%) and blood loss (50.9 vs. 108 mL). Postoperative complications occurred less often in the laparoscopic group (10.8% vs. 26.9%).

Intraoperative complications occurred in only one patient, who was in the laparoscopic group, but perioperative complications were significantly more common in the open group (17.2% vs. 6.5%). In the open group, these included wound infection (9.7%), followed by ileus (5.4%), anastomotic leakage (4.3%), and delayed gastric emptying (4.3%). In the laparoscopic group, the most common morbidities were anastomotic leakage (2.2%), ileus (2.2%), and pneumonia (2.2%).

The number of retrieved lymph nodes was also significantly higher in the laparoscopic group (20 vs. 17).

Yet, despite the short-term perioperative advantages of the laparoscopic approach in elderly patients, the 3- and 5-year overall and disease-free survival rates were not significantly different between the groups. The investigators concluded, “It is noteworthy that the 3-year and 5-year [overall survival] rates, and 3-year and 5-year [disease-free survival] rates of patients in the laparoscopic group were generally higher than the open group. The 5-year [disease-free survival] rate in the laparoscopic group was even higher than that in the open group by more than 10%. This difference might be due to the difference in the number of dissected lymph nodes between the open group and the laparoscopic group. Hence, although there was no significant difference in survival outcomes between the two surgical methods, the laparoscopic surgery in elderly patients with colorectal cancer might achieve better survival outcomes than the open surgery.”

The authors declared that they had no competing interests. This work was supported by the Beijing Hope Run Special Fund of Cancer Foundation of China and Capital Health Research and Development.

SOURCE: Zhou S et al. BMC Surg. 2019;19:137. doi: 10.1186/s12893-019-0596-3.


Comparing Artificial Intelligence Platforms for Histopathologic Cancer Diagnosis

Two machine learning platforms were successfully used to provide diagnostic guidance in the differentiation between common cancer conditions in veteran populations.

Artificial intelligence (AI), first described in 1956, encompasses the field of computer science in which machines are trained to learn from experience. The term was popularized by the 1956 Dartmouth College Summer Research Project on Artificial Intelligence.1 The field of AI is rapidly growing and has the potential to affect many aspects of our lives. The emerging importance of AI is demonstrated by a February 2019 executive order that launched the American AI Initiative, allocating resources and funding for AI development.2 The executive order stresses the potential impact of AI in the health care field, including its potential utility to diagnose disease. Federal agencies were directed to invest in AI research and development to promote rapid breakthroughs in AI technology that may impact multiple areas of society.

Machine learning (ML), a subset of AI, was defined in 1959 by Arthur Samuel and is achieved by employing mathematical models to compute sample data sets.3 Neural networks, which originated from statistical linear models, were conceived to accomplish these tasks.4 These pioneering scientific achievements led to the recent development of deep neural networks, models built to recognize patterns and achieve complex computational tasks within minutes, often far exceeding human ability.5 Compared with human decision making, ML can increase efficiency through decreased computation time while maintaining high precision and recall.6

ML has the potential for numerous applications in the health care field.7-9 One promising application is in the field of anatomic pathology. ML allows representative images to be used to train a computer to recognize patterns from labeled photographs. Based on a set of images selected to represent a specific tissue or disease process, the computer can be trained to evaluate and recognize new and unique images from patients and render a diagnosis.10 Prior to modern ML models, users would have to import many thousands of training images to produce algorithms that could recognize patterns with high accuracy. Modern ML algorithms allow for a model known as transfer learning, such that far fewer images are required for training.11-13
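
As a concrete illustration of transfer learning, the following Python sketch (not the code used by either commercial platform described below) fine-tunes a publicly available convolutional network, pretrained on ImageNet, on a small folder of labeled histology images; the folder layout, class names, and hyperparameters are assumptions made for the example.

# Generic transfer-learning sketch; assumes images are organized as
# histology_images/<class_name>/<image>.jpg.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "histology_images", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "histology_images", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=16)

# Start from a network pretrained on ImageNet and keep its weights frozen,
# so only the small classification head is trained on the new images.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

num_classes = len(train_ds.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)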

Two novel ML platforms available for public use are offered through Google (Mountain View, CA) and Apple (Cupertino, CA).14,15 They each offer a user-friendly interface with minimal experience required in computer science. Google AutoML uses ML via cloud services to store and retrieve data with ease. No coding knowledge is required. The Apple Create ML Module provides computer-based ML, requiring only a few lines of code.

The Veterans Health Administration (VHA) is the largest single health care system in the US, and nearly 50 000 cancer cases are diagnosed at the VHA annually.16 Cancers of the lung and colon are among the most common sources of invasive cancer and are the 2 most common causes of cancer deaths in America.16 We have previously reported the accurate detection of non-small cell lung cancers (NSCLCs), including adenocarcinomas and squamous cell carcinomas (SCCs), as well as colon cancers, using Apple ML.17,18 In the present study, we expand on these findings by comparing the Apple and Google ML platforms in a variety of common pathologic scenarios in veteran patients. Using limited training data, both programs are compared for precision and recall in differentiating conditions involving lung and colon pathology.
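
Because the comparisons throughout this study are framed in terms of precision and recall, a brief sketch of how those metrics are calculated may be helpful. The labels and predictions below are hypothetical placeholders, not data from the study.

# Precision and recall from per-class predictions (hypothetical labels shown).
from sklearn.metrics import precision_score, recall_score

y_true = ["adenocarcinoma", "scc", "adenocarcinoma", "benign", "scc", "benign"]
y_pred = ["adenocarcinoma", "scc", "scc", "benign", "scc", "benign"]

# For each class, precision = TP / (TP + FP) and recall = TP / (TP + FN);
# "macro" averaging takes the unweighted mean across classes.
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
print(f"macro precision = {precision:.2f}, macro recall = {recall:.2f}")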

In the first 4 experiments, we evaluated the ability of the platforms to differentiate normal lung tissue from cancerous lung tissue, to distinguish lung adenocarcinoma from SCC, and to differentiate colon adenocarcinoma from normal colon tissue. Next, cases of colon adenocarcinoma were assessed to determine whether the presence or absence of a KRAS mutation could be determined histologically using the AI platforms. KRAS mutations are found in a variety of cancers, including about 40% of colon adenocarcinomas.19 For colon cancers, the presence or absence of a KRAS mutation has important implications for patients, as it determines whether the tumor will respond to specific chemotherapy agents.20 KRAS mutation status is currently determined by complex molecular testing of tumor tissue.21 We assessed the potential of ML to determine whether the mutation is present by computerized morphologic analysis alone. Our last experiment examined the ability of the Apple and Google platforms to differentiate between adenocarcinomas of lung origin and those of colon origin, which has potential utility in determining the site of origin of metastatic carcinoma.22

 

 

Methods

Fifty cases of lung SCC, 50 cases of lung adenocarcinoma, and 50 cases of colon adenocarcinoma were randomly retrieved from our molecular database. Twenty-five colon adenocarcinoma cases were positive for mutation in KRAS, while 25 cases were negative for mutation in KRAS. Seven hundred fifty total images of lung tissue (250 benign lung tissue, 250 lung adenocarcinomas, and 250 lung SCCs) and 500 total images of colon tissue (250 benign colon tissue and 250 colon adenocarcinoma) were obtained using a Leica Microscope MC190 HD Camera (Wetzlar, Germany) connected to an Olympus BX41 microscope (Center Valley, PA) and the Leica Acquire 9072 software for Apple computers. All the images were captured at a resolution of 1024 x 768 pixels using a 60x dry objective. Lung tissue images were captured and saved on a 2012 Apple MacBook Pro computer, and colon images were captured and saved on a 2011 Apple iMac computer. Both computers were running macOS v10.13.
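
Both platforms accept training data as images grouped into one folder per class. Purely as an illustration (the paths, folder names, and script are assumptions, not part of the study workflow), captured images could be organized as follows before import.

# Illustrative sketch: arrange captured .jpg files into one folder per class,
# the layout both Create ML and AutoML Vision accept for image classification.
from pathlib import Path
import shutil

CLASSES = {
    "benign_lung": "captures/lung/benign",
    "lung_adenocarcinoma": "captures/lung/adeno",
    "lung_scc": "captures/lung/scc",
    "benign_colon": "captures/colon/benign",
    "colon_adenocarcinoma": "captures/colon/adeno",
}

dataset_root = Path("dataset")
for label, source in CLASSES.items():
    target = dataset_root / label
    target.mkdir(parents=True, exist_ok=True)
    for image in sorted(Path(source).glob("*.jpg")):
        shutil.copy2(image, target / image.name)
    print(f"{label}: {len(list(target.glob('*.jpg')))} images")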

Creating Image Classifier Models Using Apple Create ML

Apple Create ML is a suite of products that use various tools to create and train custom ML models on Apple computers.15 The suite contains many features, including image classification to train an ML model to classify images, natural language processing to classify natural language text, and tabular data to train models that deal with labeling information or estimating new quantities. We used Create ML Image Classification to create image classifier models for our project (Appendix A).

Creating ML Modules Using Google Cloud AutoML Vision Beta

Google Cloud AutoML is a suite of machine learning products, including AutoML Vision, AutoML Natural Language, and AutoML Translation.14 All Cloud AutoML machine learning products were in beta version at the time of experimentation. We used Cloud AutoML Vision beta to create ML modules for our project. Unlike Apple Create ML, which runs on a local Apple computer, Google Cloud AutoML runs online through a Google Cloud account. Because it uses cloud-based architecture, there are no minimum specification requirements for the local computer (Appendix B).

 

Experiment 1

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to detect and subclassify NSCLC based on the histopathologic images. We created 3 classes of images (250 images each): benign lung tissue, lung adenocarcinoma, and lung SCC.

Experiment 2

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between normal lung tissue and NSCLC histopathologic images, using a 50/50 mixture of lung adenocarcinoma and lung SCC for the NSCLC class. We created 2 classes of images (250 images each): benign lung tissue and lung NSCLC.

Experiment 3

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between lung adenocarcinoma and lung SCC histopathologic images. We created 2 classes of images (250 images each): adenocarcinoma and SCC.

Experiment 4

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to detect colon cancer histopathologic images regardless of mutation in KRAS status. We created 2 classes of images (250 images each): benign colon tissue and colon adenocarcinoma.

 

 

Experiment 5

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between histopathologic images of colon adenocarcinoma with KRAS mutations and those without KRAS mutations. We created 2 classes of images (125 images each): colon adenocarcinoma cases with a KRAS mutation and colon adenocarcinoma cases without a KRAS mutation.

Experiment 6

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between lung adenocarcinoma and colon adenocarcinoma histopathologic images. We created 2 classes of images (250 images each): colon adenocarcinoma and lung adenocarcinoma.

Results

Twelve machine learning models were created in 6 experiments using Apple Create ML and Google AutoML (Table). To investigate recall and precision differences between the Apple and Google ML algorithms, we performed 2-tailed, paired t tests. No statistically significant differences were found (P = .52 for recall; P = .60 for precision).
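
For readers who wish to reproduce this type of comparison, the sketch below runs a paired, 2-tailed t test across per-experiment recall values for two platforms using SciPy. The values shown are placeholders for illustration, not the recall values reported here.

# Paired, 2-tailed t test across the 6 experiments (placeholder values only;
# these are NOT the recall values reported in the study).
from scipy import stats

recall_platform_a = [0.97, 0.95, 0.92, 0.98, 0.62, 0.96]
recall_platform_b = [0.96, 0.96, 0.93, 0.97, 0.58, 0.97]

t_stat, p_value = stats.ttest_rel(recall_platform_a, recall_platform_b)
print(f"t = {t_stat:.3f}, two-tailed P = {p_value:.3f}")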

Overall, each model performed well in distinguishing between normal and neoplastic tissue for both lung and colon cancers. In subclassifying NSCLC into adenocarcinoma and SCC, the models were shown to have high levels of precision and recall. The models also were successful in distinguishing between lung and colonic origin of adenocarcinoma (Figures 1-4). However, both systems had trouble discerning colon adenocarcinoma with mutations in KRAS from adenocarcinoma without mutations in KRAS.

 

Discussion

Image classifier models using ML algorithms hold promise to revolutionize the health care field. ML products, such as the modules offered by Apple and Google, are easy to use and provide a simple graphical user interface that allows individuals to train models to perform humanlike tasks in real time. In our experiments, we compared the two platforms to determine their ability to differentiate and subclassify histopathologic images with high precision and recall, using scenarios commonly encountered in treating veteran patients.

Analysis of the results revealed high precision and recall values, illustrating the models’ ability to distinguish benign lung tissue from lung SCC and lung adenocarcinoma in ML model 1, benign lung from NSCLC in ML model 2, and benign colon from colonic adenocarcinoma in ML model 4. In ML models 3 and 6, both algorithms performed at a high level in differentiating lung SCC from lung adenocarcinoma and lung adenocarcinoma from colonic adenocarcinoma, respectively. Of note, ML model 5 had the lowest precision and recall values across both algorithms, demonstrating the models’ limited utility in predicting molecular profiles such as the KRAS mutations tested here. This is not surprising, as pathologists currently require complex molecular tests to detect KRAS mutations reliably in colon cancer.

Both modules require minimal programming experience and are easy to use. In our comparison, we demonstrated critical distinguishing characteristics that differentiate the 2 products.

Apple Create ML Image Classifier runs on local Mac computers with Xcode version 10 and macOS 10.14 or later, and requires just 3 lines of code to perform computations. Although the product is limited to Apple computers, it is free to use, and images are stored on the local hard drive. A feature of particular note on the Apple platform is image augmentation: imported images can be cropped, rotated, blurred, and flipped to improve the model’s ability to recognize test images and perform pattern recognition. This feature is not as readily available on the Google platform. By default, Create ML uses 75% of the imported images as the training set and randomly reserves 5% as a validation set; the remaining 20% comprise the testing set. Training a model takes about 2 minutes on average. The score threshold is set at 50% and, unlike in Google AutoML Vision, cannot be adjusted for each image class.

Google AutoML Vision is platform independent and can be accessed from many devices, but it stores images on remote Google servers and requires computing fees once a $300, 12-month credit is exhausted. AutoML Vision randomly assigns 80% of the total images to the training set, 10% to the validation set, and 10% to the testing set; note that these default percentages differ from those of the Apple module. With default computational power, training takes longer than with Apple Create ML, about 8 minutes per model, although additional computational power can be purchased to decrease training time. The user receives e-mail alerts when computation begins and ends, and computation time was calculated by subtracting the time of the initial e-mail from that of the final e-mail.
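
The differing default partitions (80/10/10 for AutoML Vision vs 75/5/20 for Create ML) can also be reproduced outside either platform when preparing data. The helper below is an illustrative sketch, not part of either product; the file names and seed are hypothetical.

# Reproducible train/validation/test split (80/10/10 by default, as in
# AutoML Vision; pass train=0.75, val=0.05 for the Create ML defaults).
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

paths = [f"img_{i:03d}.jpg" for i in range(250)]  # hypothetical file names
train_set, val_set, test_set = split_dataset(paths)
print(len(train_set), len(val_set), len(test_set))  # 200 25 25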

Based on the recall and precision values obtained, we determined that there was no significant difference between the 2 machine learning platforms tested at their default settings. These findings demonstrate the promise of using ML algorithms to assist in the performance of human tasks and behaviors, specifically the diagnosis of histopathologic images. The results have numerous potential uses in clinical medicine: ML algorithms have been successfully applied to diagnostic and prognostic endeavors in pathology,23-28 dermatology,29-31 ophthalmology,32 cardiology,33 and radiology.34-36

Pathologists often use additional tests, such as special staining of tissues or molecular tests, to assist with accurate classification of tumors. ML platforms offer the potential of an additional tool for pathologists to use along with human microscopic interpretation.37,38 In addition, the number of pathologists in the US is dramatically decreasing, and many other countries have marked physician shortages, especially in fields of specialized training such as pathology.39-42 These models could readily assist physicians in underserved countries and impact shortages of pathologists elsewhere by providing more specific diagnoses in an expedited manner.43

Finally, although we have explored the application of these platforms in common cancer scenarios, great potential exists to use similar techniques in the detection of other conditions. These include the potential for classification and risk assessment of precancerous lesions, infectious processes in tissue (eg, detection of tuberculosis or malaria),24,44 inflammatory conditions (eg, arthritis subtypes, gout),45 blood disorders (eg, abnormal blood cell morphology),46 and many others. The potential of these technologies to improve health care delivery to veteran patients seems to be limited only by the imagination of the user.47

Regarding the limited effectiveness in determining the presence or absence of KRAS mutations in colon adenocarcinoma, it bears repeating that pathologists currently rely on complex molecular tests to detect these mutations at the DNA level.21 The use of more extensive training data sets may improve recall and precision in cases such as these and warrants further study. Our experiments were limited to the stipulations placed by the free trial software agreements; no costs were expended to use the algorithms, though an Apple computer was required.

 

 

Conclusion

We have demonstrated the successful application of 2 readily available ML platforms in providing diagnostic guidance in differentiation between common cancer conditions in veteran patient populations. Although both platforms performed very well with no statistically significant differences in results, some distinctions are worth noting. Apple Create ML can be used on local computers but is limited to an Apple operating system. Google AutoML is not platform-specific but runs only via Google Cloud with associated computational fees. Using these readily available models, we demonstrated the vast potential of AI in diagnostic pathology. The application of AI to clinical medicine remains in the very early stages. The VA is uniquely poised to provide leadership as AI technologies will continue to dramatically change the future of health care, both in veteran and nonveteran patients nationwide.

Acknowledgments

The authors thank Paul Borkowski for his constructive criticism and proofreading of this manuscript. This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital.

References

1. Moor J. The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 2006;27(4):87-91.

2. Trump D. Accelerating America’s leadership in artificial intelligence. https://www.whitehouse.gov/articles/accelerating-americas-leadership-in-artificial-intelligence. Published February 11, 2019. Accessed September 4, 2019.

3. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210-229.

4. SAS Users Group International. Neural networks and statistical models. In: Sarle WS. Proceedings of the Nineteenth Annual SAS Users Group International Conference. SAS Institute: Cary, North Carolina; 1994:1538-1550. http://www.sascommunity.org/sugi/SUGI94/Sugi-94-255%20Sarle.pdf. Accessed September 16, 2019.

5. Schmidhuber J. Deep learning in neural networks: an overview. Neural Networks. 2015;61:85-117.

6. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.

7. Jiang F, Jiang Y, Li H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243.

8. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. 2017;37(2):505-515.

9. Deo RC. Machine learning in medicine. Circulation. 2015;132(20):1920-1930.

10. Janowczyk A, Madabhushi A. Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases. J Pathol Inform. 2016;7(1):29.

11. Oquab M, Bottou L, Laptev I, Sivic J. Learning and transferring mid-level image representations using convolutional neural networks. Presented at: IEEE Conference on Computer Vision and Pattern Recognition, 2014. http://openaccess.thecvf.com/content_cvpr_2014/html/Oquab_Learning_and_Transferring_2014_CVPR_paper.html. Accessed September 4, 2019.

12. Shin HC, Roth HR, Gao M, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35(5):1285-1298.

13. Tajbakhsh N, Shin JY, Gurudu SR, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299-1312.

14. Cloud AutoML. https://cloud.google.com/automl. Accessed September 4, 2019.

15. Create ML. https://developer.apple.com/documentation/createml. Accessed September 4, 2019.

16. Zullig LL, Sims KJ, McNeil R, et al. Cancer incidence among patients of the U.S. Veterans Affairs Health Care System: 2010 Update. Mil Med. 2017;182(7):e1883-e1891.

17. Borkowski AA, Wilson CP, Borkowski SA, Deland LA, Mastorides SM. Using Apple machine learning algorithms to detect and subclassify non-small cell lung cancer. https://arxiv.org/ftp/arxiv/papers/1808/1808.08230.pdf. Accessed September 4, 2019.

18. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Mastorides SM. Apple machine learning algorithms successfully detect colon cancer but fail to predict KRAS mutation status. http://arxiv.org/abs/1812.04660. Revised January 15, 2019. Accessed September 4, 2019.

19. Armaghany T, Wilson JD, Chu Q, Mills G. Genetic alterations in colorectal cancer. Gastrointest Cancer Res. 2012;5(1):19-27.

20. Herzig DO, Tsikitis VL. Molecular markers for colon diagnosis, prognosis and targeted therapy. J Surg Oncol. 2015;111(1):96-102.

21. Ma W, Brodie S, Agersborg S, Funari VA, Albitar M. Significant improvement in detecting BRAF, KRAS, and EGFR mutations using next-generation sequencing as compared with FDA-cleared kits. Mol Diagn Ther. 2017;21(5):571-579.

22. Greco FA. Molecular diagnosis of the tissue of origin in cancer of unknown primary site: useful in patient management. Curr Treat Options Oncol. 2013;14(4):634-642.

23. Bejnordi BE, Veta M, van Diest PJ, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318(22):2199-2210.

24. Xiong Y, Ba X, Hou A, Zhang K, Chen L, Li T. Automatic detection of mycobacterium tuberculosis using artificial intelligence. J Thorac Dis. 2018;10(3):1936-1940.

25. Cruz-Roa A, Gilmore H, Basavanhally A, et al. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7:46450.

26. Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24(10):1559-1567.

27. Ertosun MG, Rubin DL. Automated grading of gliomas using deep learning in digital pathology images: a modular approach with ensemble of convolutional neural networks. AMIA Annu Symp Proc. 2015;2015:1899-1908.

28. Wahab N, Khan A, Lee YS. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection. Comput Biol Med. 2017;85:86-97.

29. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118.

30. Han SS, Park GH, Lim W, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS One. 2018;13(1):e0191493.

31. Fujisawa Y, Otomo Y, Ogata Y, et al. Deep-learning-based, computer-aided classifier developed with a small dataset of clinical images surpasses board-certified dermatologists in skin tumour diagnosis. Br J Dermatol. 2019;180(2):373-381.

32. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410.

33. Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One. 2017;12(4):e0174944.

34. Cheng J-Z, Ni D, Chou Y-H, et al. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep. 2016;6(1):24454.

35. Wang X, Yang W, Weinreb J, et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep. 2017;7(1):15415.

36. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582.

37. Bardou D, Zhang K, Ahmad SM. Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access. 2018;6(6):24680-24693.

38. Sheikhzadeh F, Ward RK, van Niekerk D, Guillaud M. Automatic labeling of molecular biomarkers of immunohistochemistry images using fully convolutional networks. PLoS One. 2018;13(1):e0190783.

39. Metter DM, Colgan TJ, Leung ST, Timmons CF, Park JY. Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Netw Open. 2019;2(5):e194337.

40. Benediktsson H, Whitelaw J, Roy I. Pathology services in developing countries: a challenge. Arch Pathol Lab Med. 2007;131(11):1636-1639.

41. Graves D. The impact of the pathology workforce crisis on acute health care. Aust Health Rev. 2007;31(suppl 1):S28-S30.

42. NHS pathology shortages cause cancer diagnosis delays. https://www.gmjournal.co.uk/nhs-pathology-shortages-are-causing-cancer-diagnosis-delays. Published September 18, 2018. Accessed September 4, 2019.

43. Abbott LM, Smith SD. Smartphone apps for skin cancer diagnosis: Implications for patients and practitioners. Australas J Dermatol. 2018;59(3):168-170.

44. Poostchi M, Silamut K, Maude RJ, Jaeger S, Thoma G. Image analysis and machine learning for detecting malaria. Transl Res. 2018;194:36-55.

45. Orange DE, Agius P, DiCarlo EF, et al. Identification of three rheumatoid arthritis disease subtypes by machine learning integration of synovial histologic features and RNA sequencing data. Arthritis Rheumatol. 2018;70(5):690-701.

46. Rodellar J, Alférez S, Acevedo A, Molina A, Merino A. Image processing and machine learning in the morphological analysis of blood cells. Int J Lab Hematol. 2018;40(suppl 1):46-53.

47. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88.

Author and Disclosure Information

Andrew Borkowski is Chief of the Molecular Diagnostics Laboratory; Catherine Wilson is a Medical Technologist; Steven Borkowski is a Research Consultant; Brannon Thomas is Chief of the Microbiology Laboratory; Lauren Deland is a Research Coordinator; and Stephen Mastorides is Chief of the Pathology and Laboratory Medicine Service; all at James A. Haley Veterans’ Hospital in Tampa, Florida. Andrew Borkowski is a Professor; L. Brannon Thomas is an Assistant Professor; Stefanie Grewe is a Pathology Resident; and Stephen Mastorides is a Professor; all at the University of South Florida Morsani College of Medicine in Tampa.
Correspondence: Andrew Borkowski ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 36(10)a
Publications
Topics
Page Number
456-463
Sections
Author and Disclosure Information

Andrew Borkowski is Chief of the Molecular Diagnostics Laboratory; Catherine Wilson is a Medical Technologist; Steven Borkowski is a Research Consultant; Brannon Thomas is Chief of the Microbiology Laboratory; Lauren Deland is a Research Coordinator; and Stephen Mastorides is Chief of the Pathology and Laboratory Medicine Service; all at James A. Haley Veterans’ Hospital in Tampa, Florida. Andrew Borkowski is a Professor; L. Brannon Thomas is an Assistant Professor; Stefanie Grewe is a Pathology Resident; and Stephen Mastorides is a Professor; all at the University of South Florida Morsani College of Medicine in Tampa.
Correspondence: Andrew Borkowski ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Author and Disclosure Information

Andrew Borkowski is Chief of the Molecular Diagnostics Laboratory; Catherine Wilson is a Medical Technologist; Steven Borkowski is a Research Consultant; Brannon Thomas is Chief of the Microbiology Laboratory; Lauren Deland is a Research Coordinator; and Stephen Mastorides is Chief of the Pathology and Laboratory Medicine Service; all at James A. Haley Veterans’ Hospital in Tampa, Florida. Andrew Borkowski is a Professor; L. Brannon Thomas is an Assistant Professor; Stefanie Grewe is a Pathology Resident; and Stephen Mastorides is a Professor; all at the University of South Florida Morsani College of Medicine in Tampa.
Correspondence: Andrew Borkowski ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Article PDF
Article PDF
Related Articles
Two machine learning platforms were successfully used to provide diagnostic guidance in the differentiation between common cancer conditions in veteran populations.
Two machine learning platforms were successfully used to provide diagnostic guidance in the differentiation between common cancer conditions in veteran populations.

Artificial intelligence (AI), first described in 1956, encompasses the field of computer science in which machines are trained to learn from experience. The term was popularized by the 1956 Dartmouth College Summer Research Project on Artificial Intelligence.1 The field of AI is rapidly growing and has the potential to affect many aspects of our lives. The emerging importance of AI is demonstrated by a February 2019 executive order that launched the American AI Initiative, allocating resources and funding for AI development.2 The executive order stresses the potential impact of AI in the health care field, including its potential utility to diagnose disease. Federal agencies were directed to invest in AI research and development to promote rapid breakthroughs in AI technology that may impact multiple areas of society.

Machine learning (ML), a subset of AI, was defined in 1959 by Arthur Samuel and is achieved by employing mathematic models to compute sample data sets.3 Originating from statistical linear models, neural networks were conceived to accomplish these tasks.4 These pioneering scientific achievements led to recent developments of deep neural networks. These models are developed to recognize patterns and achieve complex computational tasks within a matter of minutes, often far exceeding human ability.5 ML can increase efficiency with decreased computation time, high precision, and recall when compared with that of human decision making.6

ML has the potential for numerous applications in the health care field.7-9 One promising application is in the field of anatomic pathology. ML allows representative images to be used to train a computer to recognize patterns from labeled photographs. Based on a set of images selected to represent a specific tissue or disease process, the computer can be trained to evaluate and recognize new and unique images from patients and render a diagnosis.10 Prior to modern ML models, users would have to import many thousands of training images to produce algorithms that could recognize patterns with high accuracy. Modern ML algorithms allow for a model known as transfer learning, such that far fewer images are required for training.11-13

Two novel ML platforms available for public use are offered through Google (Mountain View, CA) and Apple (Cupertino, CA).14,15 They each offer a user-friendly interface with minimal experience required in computer science. Google AutoML uses ML via cloud services to store and retrieve data with ease. No coding knowledge is required. The Apple Create ML Module provides computer-based ML, requiring only a few lines of code.

The Veterans Health Administration (VHA) is the largest single health care system in the US, and nearly 50 000 cancer cases are diagnosed at the VHA annually.16 Cancers of the lung and colon are among the most common sources of invasive cancer and are the 2 most common causes of cancer deaths in America.16 We have previously reported using Apple ML in detecting non-small cell lung cancers (NSCLCs), including adenocarcinomas and squamous cell carcinomas (SCCs); and colon cancers with accuracy.17,18 In the present study, we expand on these findings by comparing Apple and Google ML platforms in a variety of common pathologic scenarios in veteran patients. Using limited training data, both programs are compared for precision and recall in differentiating conditions involving lung and colon pathology.

In the first 4 experiments, we evaluated the ability of the platforms to differentiate normal lung tissue from cancerous lung tissue, to distinguish lung adenocarcinoma from SCC, and to differentiate colon adenocarcinoma from normal colon tissue. Next, cases of colon adenocarcinoma were assessed to determine whether the presence or absence of the KRAS proto-oncogene could be determined histologically using the AI platforms. KRAS is found in a variety of cancers, including about 40% of colon adenocarcinomas.19 For colon cancers, the presence or absence of the mutation in KRAS has important implications for patients as it determines whether the tumor will respond to specific chemotherapy agents.20 The presence of the KRAS gene is currently determined by complex molecular testing of tumor tissue.21 However, we assessed the potential of ML to determine whether the mutation is present by computerized morphologic analysis alone. Our last experiment examined the ability of the Apple and Google platforms to differentiate between adenocarcinomas of lung origin vs colon origin. This has potential utility in determining the site of origin of metastatic carcinoma.22

 

 

Methods

Fifty cases of lung SCC, 50 cases of lung adenocarcinoma, and 50 cases of colon adenocarcinoma were randomly retrieved from our molecular database. Twenty-five colon adenocarcinoma cases were positive for mutation in KRAS, while 25 cases were negative for mutation in KRAS. Seven hundred fifty total images of lung tissue (250 benign lung tissue, 250 lung adenocarcinomas, and 250 lung SCCs) and 500 total images of colon tissue (250 benign colon tissue and 250 colon adenocarcinoma) were obtained using a Leica Microscope MC190 HD Camera (Wetzlar, Germany) connected to an Olympus BX41 microscope (Center Valley, PA) and the Leica Acquire 9072 software for Apple computers. All the images were captured at a resolution of 1024 x 768 pixels using a 60x dry objective. Lung tissue images were captured and saved on a 2012 Apple MacBook Pro computer, and colon images were captured and saved on a 2011 Apple iMac computer. Both computers were running macOS v10.13.

Creating Image Classifier Models Using Apple Create ML

Apple Create ML is a suite of products that use various tools to create and train custom ML models on Apple computers.15 The suite contains many features, including image classification to train a ML model to classify images, natural language processing to classify natural language text, and tabular data to train models that deal with labeling information or estimating new quantities. We used Create ML Image Classification to create image classifier models for our project (Appendix A).

Creating ML Modules Using Google Cloud AutoML Vision Beta

Google Cloud AutoML is a suite of machine learning products, including AutoML Vision, AutoML Natural Language, and AutoML Translation.14 All Cloud AutoML machine learning products were in beta version at the time of experimentation. We used Cloud AutoML Vision beta to create ML modules for our project. Unlike Apple Create ML, which runs on a local Apple computer, Google Cloud AutoML runs online through a Google Cloud account. Because the service uses cloud-based architecture, there are no minimum specification requirements for the local computer (Appendix B).
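For readers who prefer to script against a trained model rather than use the web console, the sketch below shows how an AutoML Vision beta model could be queried with the Python client that Google provided at the time. The project ID, model ID, image path, and score threshold are placeholders, the beta API has since evolved, and this is illustrative only; it is not the workflow used in the study, which relied on the web interface described in Appendix B.

```python
# Minimal sketch of a prediction request against an AutoML Vision beta model.
# Assumes `pip install google-cloud-automl` and application-default credentials.
# PROJECT_ID, MODEL_ID, and IMAGE_PATH are hypothetical placeholders.
from google.cloud import automl_v1beta1 as automl

PROJECT_ID = "my-gcp-project"       # placeholder
MODEL_ID = "ICN1234567890"          # placeholder model identifier
IMAGE_PATH = "lung_dataset/lung_scc/case12_lung_scc_03.jpg"  # placeholder

prediction_client = automl.PredictionServiceClient()
model_name = f"projects/{PROJECT_ID}/locations/us-central1/models/{MODEL_ID}"

with open(IMAGE_PATH, "rb") as f:
    payload = {"image": {"image_bytes": f.read()}}

# score_threshold mirrors the adjustable per-class threshold discussed later.
response = prediction_client.predict(model_name, payload, {"score_threshold": "0.5"})

for result in response.payload:
    print(result.display_name, result.classification.score)
```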

Experiment 1

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to detect and subclassify NSCLC based on the histopathologic images. We created 3 classes of images (250 images each): benign lung tissue, lung adenocarcinoma, and lung SCC.

Experiment 2

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between normal lung tissue and NSCLC histopathologic images, using a 50/50 mixture of lung adenocarcinoma and lung SCC for the NSCLC class. We created 2 classes of images (250 images each): benign lung tissue and lung NSCLC.

Experiment 3

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between lung adenocarcinoma and lung SCC histopathologic images. We created 2 classes of images (250 images each): adenocarcinoma and SCC.

Experiment 4

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to detect colon cancer histopathologic images regardless of KRAS mutation status. We created 2 classes of images (250 images each): benign colon tissue and colon adenocarcinoma.

Experiment 5

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between histopathologic images of colon adenocarcinoma with and without KRAS mutations. We created 2 classes of images (125 images each): colon adenocarcinoma with a KRAS mutation and colon adenocarcinoma without a KRAS mutation.

Experiment 6

We compared Apple Create ML Image Classifier and Google AutoML Vision in their ability to differentiate between lung adenocarcinoma and colon adenocarcinoma histopathologic images. We created 2 classes of images (250 images each): colon adenocarcinoma and lung adenocarcinoma.

Results

Twelve machine learning models were created in 6 experiments using Apple Create ML and Google AutoML (Table). To investigate recall and precision differences between the Apple and Google ML algorithms, we performed 2-tailed, paired t tests. No statistically significant differences were found (P = .52 for recall; P = .60 for precision).
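To illustrate the form of this comparison, the sketch below runs a 2-tailed paired t test across the 6 models using SciPy. The recall values shown are arbitrary placeholders, not the study's data, which appear in the Table.

```python
# Paired, 2-tailed t test comparing per-model recall between the 2 platforms.
# The six values per platform are arbitrary placeholders, NOT study data;
# substitute the recall (or precision) values reported in the Table.
from scipy import stats

apple_recall  = [0.95, 0.94, 0.92, 0.96, 0.60, 0.93]   # placeholder values
google_recall = [0.96, 0.93, 0.91, 0.95, 0.62, 0.94]   # placeholder values

t_stat, p_value = stats.ttest_rel(apple_recall, google_recall)  # 2-tailed by default
print(f"t = {t_stat:.3f}, P = {p_value:.2f}")
```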

Overall, each model performed well in distinguishing between normal and neoplastic tissue for both lung and colon cancers. In subclassifying NSCLC into adenocarcinoma and SCC, the models were shown to have high levels of precision and recall. The models also were successful in distinguishing between lung and colonic origin of adenocarcinoma (Figures 1-4). However, both systems had trouble discerning colon adenocarcinoma with mutations in KRAS from adenocarcinoma without mutations in KRAS.

Discussion

Image classifier models using ML algorithms hold promise to revolutionize the health care field. ML products, such as the modules offered by Apple and Google, are easy to use and provide a simple graphical user interface that allows individuals to train models to perform humanlike tasks in real time. In our experiments, we compared the 2 platforms on their ability to differentiate and subclassify histopathologic images with high precision and recall in scenarios commonly encountered in the care of veteran patients.

Analysis of the results revealed high precision and recall values, illustrating the models' ability to distinguish benign lung tissue from lung SCC and lung adenocarcinoma in ML model 1, benign lung from NSCLC in ML model 2, and benign colon from colonic adenocarcinoma in ML model 4. In ML models 3 and 6, both algorithms performed at a high level in differentiating lung SCC from lung adenocarcinoma and lung adenocarcinoma from colonic adenocarcinoma, respectively. Of note, ML model 5 had the lowest precision and recall values across both platforms, demonstrating the models' limited utility in predicting molecular profiles such as the KRAS mutations tested here. This is not surprising, as pathologists currently require complex molecular tests to reliably detect KRAS mutations in colon cancer.
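For readers less familiar with these metrics, the sketch below shows how per-class precision and recall are computed from a model's predictions using scikit-learn. The ground-truth and predicted labels are invented for illustration and do not correspond to any model in the Table.

```python
# Precision = TP / (TP + FP); recall = TP / (TP + FN), computed per class.
# The labels below are invented for illustration only.
from sklearn.metrics import precision_score, recall_score

y_true = ["benign", "adeno", "scc", "adeno", "benign", "scc", "adeno", "benign"]
y_pred = ["benign", "adeno", "adeno", "adeno", "benign", "scc", "scc", "benign"]

for cls in ["benign", "adeno", "scc"]:
    p = precision_score(y_true, y_pred, labels=[cls], average="macro", zero_division=0)
    r = recall_score(y_true, y_pred, labels=[cls], average="macro", zero_division=0)
    print(f"{cls}: precision={p:.2f}, recall={r:.2f}")
```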

Both modules require minimal programming experience and are easy to use. In our comparison, we identified several characteristics that distinguish the 2 products.

Apple Create ML Image Classifier runs on local Mac computers with Xcode version 10 and macOS 10.14 or later, with just 3 lines of code required to perform computations. Although the product is limited to Apple computers, it is free to use, and images are stored on the local hard drive. A notable feature of the Apple platform is that imported images can be augmented to alter their appearance and enhance model training; for example, images can be cropped, rotated, blurred, and flipped to improve the model's ability to recognize test images and perform pattern recognition. This feature is not as readily available on the Google platform. The default Create ML training set consists of 75% of the imported images, with 5% randomly selected as a validation set and the remaining 20% used as the testing set. Training the model takes about 2 minutes on average. The score threshold is fixed at 50% and, unlike in Google AutoML Vision, cannot be adjusted for each image class.
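Create ML applies these augmentations internally. As a rough, platform-neutral illustration of what such transformations do to a training image, the sketch below uses the Pillow library; the file path and parameters are arbitrary and this is not Apple's implementation.

```python
# Platform-neutral illustration of crop / rotate / blur / flip augmentations.
# This is NOT Create ML's internal implementation; paths and parameters are arbitrary.
from PIL import Image, ImageFilter

img = Image.open("lung_dataset/lung_adenocarcinoma/example.jpg")  # placeholder path

augmented = {
    "cropped": img.crop((100, 100, 900, 700)),           # fixed crop box
    "rotated": img.rotate(15, expand=True),               # 15-degree rotation
    "blurred": img.filter(ImageFilter.GaussianBlur(2)),   # mild Gaussian blur
    "flipped": img.transpose(Image.FLIP_LEFT_RIGHT),      # horizontal flip
}

for name, variant in augmented.items():
    variant.save(f"augmented_{name}.jpg")
```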

Google AutoML Vision is platform independent and can be accessed from many devices. It stores images on remote Google servers and charges computing fees once an initial $300 credit, valid for 12 months, is exhausted. In AutoML Vision, a random 80% of the total images are used for the training set, 10% for the validation set, and 10% for the testing set; these default proportions differ from those of the Apple module. Training with the default computational power takes longer than with Apple Create ML, averaging about 8 minutes per module, although additional computational power can be purchased to decrease training time. The user receives e-mail alerts when training begins and when it is completed; we calculated computation time as the interval between the initial and final e-mails.
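As a concrete illustration of the 80/10/10 default partition (contrast with Create ML's 75/5/20), the sketch below performs a two-stage split with scikit-learn on a hypothetical file list. Both platforms perform this split internally; the code is only meant to make the proportions tangible.

```python
# Illustration of an 80/10/10 train/validation/test split (AutoML Vision default).
# The file list is hypothetical; both platforms split data internally.
from sklearn.model_selection import train_test_split

images = [f"img_{i:03d}.jpg" for i in range(250)]   # hypothetical 250-image class

train, holdout = train_test_split(images, test_size=0.20, random_state=42)
validation, test = train_test_split(holdout, test_size=0.50, random_state=42)

print(len(train), len(validation), len(test))  # 200 / 25 / 25
```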

Based on the recall and precision values obtained, we found no significant difference between the 2 machine learning platforms tested at their default settings. These findings demonstrate the promise of using ML algorithms to assist with tasks typically performed by humans, specifically the diagnosis of histopathologic images. The results have numerous potential uses in clinical medicine: ML algorithms have already been applied successfully to diagnostic and prognostic endeavors in pathology,23-28 dermatology,29-31 ophthalmology,32 cardiology,33 and radiology.34-36

Pathologists often use additional tests, such as special staining of tissues or molecular tests, to assist with accurate classification of tumors. ML platforms offer the potential of an additional tool for pathologists to use along with human microscopic interpretation.37,38 In addition, the number of pathologists in the US is dramatically decreasing, and many other countries have marked physician shortages, especially in fields of specialized training such as pathology.39-42 These models could readily assist physicians in underserved countries and impact shortages of pathologists elsewhere by providing more specific diagnoses in an expedited manner.43

Finally, although we have explored the application of these platforms in common cancer scenarios, great potential exists to use similar techniques in the detection of other conditions. These include the potential for classification and risk assessment of precancerous lesions, infectious processes in tissue (eg, detection of tuberculosis or malaria),24,44 inflammatory conditions (eg, arthritis subtypes, gout),45 blood disorders (eg, abnormal blood cell morphology),46 and many others. The potential of these technologies to improve health care delivery to veteran patients seems to be limited only by the imagination of the user.47

Regarding the limited effectiveness in determining the presence or absence of KRAS mutations in colon adenocarcinoma, as noted above, pathologists currently rely on complex molecular tests to detect these mutations at the DNA level.21 It is possible that more extensive training data sets would improve recall and precision in such cases; this warrants further study. Our experiments were limited by the stipulations of the free-trial software agreements; no costs were incurred to use the algorithms, although an Apple computer was required.

Conclusion

We have demonstrated the successful application of 2 readily available ML platforms in providing diagnostic guidance for differentiating between common cancers in a veteran patient population. Although both platforms performed well, with no statistically significant differences in results, some distinctions are worth noting. Apple Create ML can be used on local computers but is limited to the Apple operating system. Google AutoML is not platform specific but runs only through Google Cloud, with associated computational fees. Using these readily available models, we demonstrated the vast potential of AI in diagnostic pathology. The application of AI to clinical medicine remains in its very early stages, and the VA is uniquely poised to provide leadership as AI technologies continue to change the future of health care for veteran and nonveteran patients nationwide.

Acknowledgments

The authors thank Paul Borkowski for his constructive criticism and proofreading of this manuscript. This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital.

References

1. Moor J. The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 2006;27(4):87-91.

2. Trump D. Accelerating America’s leadership in artificial intelligence. https://www.whitehouse.gov/articles/accelerating-americas-leadership-in-artificial-intelligence. Published February 11, 2019. Accessed September 4, 2019.

3. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210-229.

4. SAS Users Group International. Neural networks and statistical models. In: Sarle WS. Proceedings of the Nineteenth Annual SAS Users Group International Conference. SAS Institute: Cary, North Carolina; 1994:1538-1550. http://www.sascommunity.org/sugi/SUGI94/Sugi-94-255%20Sarle.pdf. Accessed September 16, 2019.

5. Schmidhuber J. Deep learning in neural networks: an overview. Neural Networks. 2015;61:85-117.

6. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.

7. Jiang F, Jiang Y, Li H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243.

8. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. 2017;37(2):505-515.

9. Deo RC. Machine learning in medicine. Circulation. 2015;132(20):1920-1930.

10. Janowczyk A, Madabhushi A. Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases. J Pathol Inform. 2016;7(1):29.

11. Oquab M, Bottou L, Laptev I, Sivic J. Learning and transferring mid-level image representations using convolutional neural networks. Presented at: IEEE Conference on Computer Vision and Pattern Recognition, 2014. http://openaccess.thecvf.com/content_cvpr_2014/html/Oquab_Learning_and_Transferring_2014_CVPR_paper.html. Accessed September 4, 2019.

12. Shin HC, Roth HR, Gao M, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35(5):1285-1298.

13. Tajbakhsh N, Shin JY, Gurudu SR, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299-1312.

14. Cloud AutoML. https://cloud.google.com/automl. Accessed September 4, 2019.

15. Create ML. https://developer.apple.com/documentation/createml. Accessed September 4, 2019.

16. Zullig LL, Sims KJ, McNeil R, et al. Cancer incidence among patients of the U.S. Veterans Affairs Health Care System: 2010 update. Mil Med. 2017;182(7):e1883-e1891.

17. Borkowski AA, Wilson CP, Borkowski SA, Deland LA, Mastorides SM. Using Apple machine learning algorithms to detect and subclassify non-small cell lung cancer. https://arxiv.org/ftp/arxiv/papers/1808/1808.08230.pdf. Accessed September 4, 2019.

18. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Mastorides SM. Apple machine learning algorithms successfully detect colon cancer but fail to predict KRAS mutation status. http://arxiv.org/abs/1812.04660. Revised January 15, 2019. Accessed September 4, 2019.

19. Armaghany T, Wilson JD, Chu Q, Mills G. Genetic alterations in colorectal cancer. Gastrointest Cancer Res. 2012;5(1):19-27.

20. Herzig DO, Tsikitis VL. Molecular markers for colon diagnosis, prognosis and targeted therapy. J Surg Oncol. 2015;111(1):96-102.

21. Ma W, Brodie S, Agersborg S, Funari VA, Albitar M. Significant improvement in detecting BRAF, KRAS, and EGFR mutations using next-generation sequencing as compared with FDA-cleared kits. Mol Diagn Ther. 2017;21(5):571-579.

22. Greco FA. Molecular diagnosis of the tissue of origin in cancer of unknown primary site: useful in patient management. Curr Treat Options Oncol. 2013;14(4):634-642.

23. Bejnordi BE, Veta M, van Diest PJ, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318(22):2199-2210.

24. Xiong Y, Ba X, Hou A, Zhang K, Chen L, Li T. Automatic detection of mycobacterium tuberculosis using artificial intelligence. J Thorac Dis. 2018;10(3):1936-1940.

25. Cruz-Roa A, Gilmore H, Basavanhally A, et al. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7:46450.

26. Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24(10):1559-1567.

27. Ertosun MG, Rubin DL. Automated grading of gliomas using deep learning in digital pathology images: a modular approach with ensemble of convolutional neural networks. AMIA Annu Symp Proc. 2015;2015:1899-1908.

28. Wahab N, Khan A, Lee YS. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection. Comput Biol Med. 2017;85:86-97.

29. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118.

30. Han SS, Park GH, Lim W, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS One. 2018;13(1):e0191493.

31. Fujisawa Y, Otomo Y, Ogata Y, et al. Deep-learning-based, computer-aided classifier developed with a small dataset of clinical images surpasses board-certified dermatologists in skin tumour diagnosis. Br J Dermatol. 2019;180(2):373-381.

32. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410.

33. Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One. 2017;12(4):e0174944.

34. Cheng J-Z, Ni D, Chou Y-H, et al. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep. 2016;6(1):24454.

35. Wang X, Yang W, Weinreb J, et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep. 2017;7(1):15415.

36. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582.

37. Bardou D, Zhang K, Ahmad SM. Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access. 2018;6(6):24680-24693.

38. Sheikhzadeh F, Ward RK, van Niekerk D, Guillaud M. Automatic labeling of molecular biomarkers of immunohistochemistry images using fully convolutional networks. PLoS One. 2018;13(1):e0190783.

39. Metter DM, Colgan TJ, Leung ST, Timmons CF, Park JY. Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Netw Open. 2019;2(5):e194337.

40. Benediktsson H, Whitelaw J, Roy I. Pathology services in developing countries: a challenge. Arch Pathol Lab Med. 2007;131(11):1636-1639.

41. Graves D. The impact of the pathology workforce crisis on acute health care. Aust Health Rev. 2007;31(suppl 1):S28-S30.

42. NHS pathology shortages cause cancer diagnosis delays. https://www.gmjournal.co.uk/nhs-pathology-shortages-are-causing-cancer-diagnosis-delays. Published September 18, 2018. Accessed September 4, 2019.

43. Abbott LM, Smith SD. Smartphone apps for skin cancer diagnosis: Implications for patients and practitioners. Australas J Dermatol. 2018;59(3):168-170.

44. Poostchi M, Silamut K, Maude RJ, Jaeger S, Thoma G. Image analysis and machine learning for detecting malaria. Transl Res. 2018;194:36-55.

45. Orange DE, Agius P, DiCarlo EF, et al. Identification of three rheumatoid arthritis disease subtypes by machine learning integration of synovial histologic features and RNA sequencing data. Arthritis Rheumatol. 2018;70(5):690-701.

46. Rodellar J, Alférez S, Acevedo A, Molina A, Merino A. Image processing and machine learning in the morphological analysis of blood cells. Int J Lab Hematol. 2018;40(suppl 1):46-53.

47. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88.

Issue
Federal Practitioner - 36(10)a
Page Number
456-463
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Article PDF Media

Clip closure reduced bleeding after large lesion resection

Protective benefit seen with complete clip closure
Article Type
Changed

Use of clip closure significantly reduced delayed bleeding in patients who underwent resections for large colorectal lesions, based on data from 235 individuals.

Source: American Gastroenterological Association

“Closure of a mucosal defect with clips after resection has long been considered to reduce the risk of bleeding,” but evidence to support this practice is limited, wrote Eduardo Albéniz, MD, of the Public University of Navarra (Spain), and colleagues.

In a study published in Gastroenterology, the researchers identified 235 consecutive patients who had resections of large nonpedunculated colorectal lesions from May 2016 to June 2018. Patients had an average or high risk of delayed bleeding and were randomized to receive scar closure with either 11-mm through-the-scope clips (119 patients) or no clip (116 patients).

Delayed bleeding occurred in 14 control patients (12.1%), compared with 6 clip patients (5%), an absolute risk reduction of approximately 7%. The clip group included 68 cases (57%) of complete closure and 33 cases (28%) of partial closure, as well as 18 cases of failure to close (15%); only 1 case of delayed bleeding occurred in the clip group after complete clip closure. On average, six clips were needed for complete closure.

None of the patients who experienced delayed bleeding required surgical or angiographic intervention, although 15 of the 20 patients with bleeding underwent additional endoscopy. Other adverse events included immediate bleeding in 21 clip patients and 18 controls that was managed with snare soft-tip coagulation. No deaths were reported in connection with the study.

Demographics were similar between the two groups, but the subset of patients with complete closure included more individuals aged 75 years and older and more cases with smaller polyps, compared with other subgroups, the researchers noted.

The study findings were limited by several factors, including the difficulty in predicting delayed bleeding, the potential for selection bias given the timing of patient randomization, the lack of information about polyps that were excluded from treatment, and the difficulty in completely closing the mucosal defects, the researchers noted. However, the results suggest that complete clip closure, despite its challenges, “displays a clear trend to reduce delayed bleeding risk,” and is worth an attempt.

The study was supported by the Spanish Society of Digestive Endoscopy. The researchers had no financial conflicts to disclose. MicroTech (Nanjing, China) contributed the clips used in the study.

SOURCE: Albéniz E et al. Gastroenterology. 2019 Jul 27. doi: 10.1053/j.gastro.2019.07.037.

Body

With the advent of routine submucosal lifting prior to endoscopic mucosal resection, perforation now occurs less commonly; however, delayed bleeding following resection remains problematic given the aging population and increasing use of antithrombotic agents. In this study, clip closure resulted in a decrease in post-polypectomy bleeding in patients deemed to be at high risk (at least 8%) for delayed bleeding. 

The protective benefit of clip closure was seen almost exclusively in patients who had complete closure of the defect, which was achieved in only 57% of procedures. Clinical efficacy is largely driven by endoscopist skill level and the ability to achieve complete closure. Notably, defects that were successfully clipped were smaller in size, had better accessibility, and were technically easier. Defining such procedural factors a priori is important and may influence whether one should attempt clip closure if complete clip closure is unlikely. Interestingly, the bleeding rate was higher in the control group in lesions proximal to the transverse colon, where clip closure is likely to be most beneficial and cost effective, based on emerging data. It’s worth noting that the clips used in this study were relatively small (11 mm), and not currently available in the United States, although most endoscopic clips function similarly.

Studies such as this bring evidence-based medicine to endoscopic practice. Hemostatic clips were introduced nearly 20 years ago without evidence for their effectiveness. Future studies are needed, such as those that compare electrocautery-based resection of high-risk polyps with standard clips to over-the-scope clips, and those that compare electrocautery-based resection to cold snare resection. 

Todd H. Baron, MD, is a gastroenterologist based at the University of North Carolina, Chapel Hill. He is a speaker and consultant for Olympus, Boston Scientific, and Cook Endoscopy.
 

Publications
Topics
Sections
Title
Protective benefit seen with complete clip closure

Publications
Topics
Article Type
Click for Credit Status
Active
Sections
Article Source

FROM GASTROENTEROLOGY

Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
CME ID
209096
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.