It’s not time to abandon routine screening mammography in average-risk women in their 40s
In the 1970s and early 1980s, population-based screening mammography was studied in numerous randomized controlled trials (RCTs), with the primary outcome of reduced breast cancer mortality. Although the technology and sensitivity of mammography in the 1980s were somewhat rudimentary compared with current screening, a meta-analysis of these RCTs demonstrated a clear mortality benefit for screening mammography.1 As a result, widespread population-based mammography was introduced in the mid-1980s in the United States and has become a standard for breast cancer screening.
Since that time, few RCTs of screening mammography versus observation have been conducted because of the ethical challenges of entering women into such studies as well as the difficulty and expense of long-term follow-up to measure the effect of screening on breast cancer mortality. Without ongoing RCTs of mammography, retrospective, observational, and computer simulation trials of the efficacy and harms of screening mammography have been conducted using proxy measures of mortality (such as stage at diagnosis), and some have questioned the overall benefit of screening mammography.2,3
To further complicate this controversy, some national guidelines have recommended against routinely offering screening mammography to women aged 40 to 49, based on concerns that the harms (callbacks, benign breast biopsies, overdiagnosis) exceed the potential benefits (earlier diagnosis, possible decrease in needed treatments, reduced breast cancer mortality).4 This has resulted in a confusing morass of national recommendations and uncertainty about whether to routinely offer screening mammography to women in their 40s at average risk for breast cancer.4-6
Recently, to address this question, Duffy and colleagues conducted a large RCT of women in their 40s to evaluate the long-term effect of mammography on breast cancer mortality.7 Here, I review the study in depth and offer some guidance to clinicians and women struggling with screening decisions.
Breast cancer mortality significantly lower in the screening group
The RCT, known as the UK Age trial, was conducted in England, Wales, and Scotland and enrolled 160,921 women from 1990 through 1997.7 Women were randomly assigned in a 2:1 ratio to observation or annual screening mammogram beginning at age 39–41 until age 48. (In the United Kingdom, all women are screened starting at age 50.) Study enrollees were followed for a median of 22.8 years, and the primary outcome was breast cancer mortality.
The study results showed a 25% relative risk (RR) reduction in breast cancer mortality at 10 years of follow-up in the mammography group compared with the unscreened women (83 breast cancer deaths in the mammography group vs 219 in the observation group [RR, 0.75; 95% confidence interval (CI), 0.58–0.97; P = .029]). Given the incidence of breast cancer in women in their 40s, this 25% relative risk reduction translates into approximately 1 fewer death per 1,000 women who undergo routine screening in their 40s.
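For readers who want to see where these figures come from, the effect sizes can be roughly reproduced from the counts quoted above. The group sizes in this sketch are approximated from the stated 2:1 randomization of 160,921 women (the exact per-group enrollments are not given here), so the output is illustrative rather than exact:

```python
# Approximate reproduction of the UK Age trial effect sizes from the
# numbers quoted in the text. Group sizes are estimated from the stated
# 2:1 (observation:screening) randomization, so results are illustrative.

total = 160_921
n_screen = total // 3          # ~1/3 assigned to annual mammography
n_observe = total - n_screen   # ~2/3 assigned to observation

deaths_screen = 83             # breast cancer deaths at 10 years, screened group
deaths_observe = 219           # breast cancer deaths at 10 years, observation group

risk_screen = deaths_screen / n_screen
risk_observe = deaths_observe / n_observe

rr = risk_screen / risk_observe    # relative risk (reported as 0.75)
arr = risk_observe - risk_screen   # absolute risk reduction
nns = 1 / arr                      # number needed to screen to avert 1 death

print(f"RR ≈ {rr:.2f}")
print(f"ARR ≈ {arr * 1000:.2f} per 1,000 women")
print(f"NNS ≈ {nns:,.0f}")
```

Depending on the denominator and follow-up window used, the absolute reduction lands between roughly 0.5 and 1 death per 1,000 women screened, consistent with the order of magnitude cited above; the contrast between the 25% relative reduction and the small absolute reduction is exactly why the benefit–harm debate persists.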
While there was no additional significant mortality reduction beyond 10 years of follow-up, as noted, mammography is offered routinely to all women in the United Kingdom starting at age 50. The authors concluded that “reducing the lower age limit for screening from 50 to 40 years [of age] could potentially reduce breast cancer mortality.”
Was overdiagnosis a concern? Another finding in this trial was related to overdiagnosis of breast cancer in the screened group. Overdiagnosis refers to mammographic-only diagnosis (that is, no clinical findings) of nonaggressive breast cancer, which would remain indolent and not harm the patient. The study results demonstrated essentially no overdiagnosis in women who began screening at age 40 compared with the unscreened group.
Large trial, long follow-up are key strengths
The UK Age trial’s primary strength is its study design: a large population-based RCT that included diverse participants with the critical study outcome for cancer screening (mortality). The study’s long-term follow-up is another key strength, since breast cancer mortality typically occurs 7 to 10 years after diagnosis. In addition, results were available for 99.9% of the women enrolled in the trial (that is, only 0.1% of women were lost to follow-up). Interestingly, the mortality reduction seen with screening mammography for women in their 40s validates the benefit demonstrated in other large RCTs of women in this age group.1
Another strong point is that the study addresses the issue of whether screening women in their 40s results in overdiagnosis compared with women who start screening in their 50s. Further, this study validates a prior observational study showing that mammographic findings of nonprogressive cancers do not disappear; thus, nonaggressive cancers that present on mammography in women in their 40s still would be detected when women start screening in their 50s.8
Study limitations should be noted
The study has several limitations. For example, significant improvements have been made in breast cancer treatments that may attenuate the positive impact of screening mammography. The impact of changed breast cancer management over the past 20 years could not be addressed with this study’s design, since women would have been treated in the 1990s. In addition, substantial improvements have occurred in breast cancer screening standards (2 views vs the single view used in the study) and technology since the 1990s. Current mammography includes nearly uniform use of either digital mammography (DM) or digital breast tomosynthesis (DBT), both of which improve breast cancer detection for women in their 40s compared with the older film-screen technology. In addition, DBT reduces false-positive results by approximately 40%, resulting in fewer callbacks and biopsies. While improved cancer detection and reduced false-positive results are seen with DM and DBT, whether these technology improvements result in improved breast cancer mortality has not yet been sufficiently studied.
Perhaps the most important limitation in this study is that the women did not undergo routine risk assessment before trial entry to assure that they all were at “average risk.” As a result, both high- and average-risk women would have been included in this population-based trial. Without risk stratification, it remains uncertain whether the reduction in breast cancer mortality disproportionately exists within a high-risk subgroup (such as breast cancer gene mutation carriers).
Finally, the cost-effectiveness of routine screening mammography for women in their 40s was not evaluated in this study.
The UK Age trial in perspective
The good news is that there is clear evidence that breast cancer mortality rates (deaths per 100,000) have decreased by about 40% over the past 50 years, likely due to improvements in breast cancer treatment and routine screening mammography.9 Breast cancer mortality reduction is particularly important because breast cancer remains the most common cancer and is the second leading cause of cancer death in women in the United States. In the past decade, considerable debate has arisen over whether this reduction in breast cancer mortality is due to improved treatments, routine screening mammography, or both. Authors of a retrospective trial in Australia, recently reviewed in OBG Management, suggested that the majority of improvement is due to improvements in treatment.3,10 However, as the authors pointed out, because of the trial’s retrospective design, causality can only be inferred. The current UK Age trial does add to the numerous prospective trials demonstrating a mortality benefit for mammography in women in their 40s.11
What remains a challenge for clinicians, and for women struggling with the mammography question, is the absence of risk assessment in these long-term RCTs as well as in the large retrospective database studies. Without risk stratification, these studies treated the entire study population as “average risk.” Because breast cancer risk assessment is performed only sporadically in clinical practice and there are no published RCTs of screening mammography in risk-assessed “average risk” women in their 40s, it remains uncertain whether the women benefiting from screening in their 40s are in a high-risk group or whether women of average risk in this age group also are benefiting from routine screening mammography.
What’s next: Incorporate routine risk assessment into clinical practice
It is not time to abandon screening mammography for all women in their 40s. Rather, routine risk assessment should be performed using one of many available validated or widely tested tools, a recommendation supported by the American College of Obstetricians and Gynecologists, the National Comprehensive Cancer Network, and the US Preventive Services Task Force.5,6,12
Ideally, these tools can be incorporated into an electronic health record and prepopulated using already available patient data (such as age, reproductive risk factors, current medications, breast density if available, and family history). Prepopulating available data into breast cancer risk calculators would allow clinicians to spend time counseling women regarding breast cancer risk and appropriate screening methods. The TABLE provides a summary of useful breast cancer risk calculators and includes comments about their utility, benefits, and significant limitations. In addition to breast cancer risk, the more comprehensive risk calculators (Tyrer-Cuzick and BOADICEA) allow calculation of ovarian cancer risk and gene mutation risk.
Routinely performing breast cancer risk assessment can guide discussions of screening mammography and can provide data for conducting a more individualized discussion on cancer genetic counseling and testing, risk reduction methods in high-risk women, and possible use of intensive breast cancer screening tools in identified high-risk women.
Ultimately, debating the question of whether all women should have routine breast cancer screening in their 40s should be passé. Ideally, all women should undergo breast cancer risk assessment in their 20s. Risk assessment results can then be used to guide the discussion of multiple potential interventions for women in their 40s (or earlier if appropriate), including routine screening mammography, cancer genetic counseling and testing in appropriate individuals, and intervention for women who are identified at high risk.
Absent breast cancer risk assessment, screening mammography still should be offered to women in their 40s, and the decision to proceed should be based on a discussion of risks, benefits, and the value the patient places on these factors.●
- Nelson HD, Fu R, Cantor A, et al. Effectiveness of breast cancer screening: systematic review and meta-analysis to update the 2009 US Preventive Services Task Force recommendation. Ann Intern Med. 2016;164:244-255.
- Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med. 2012;367:1998-2005.
- Burton R, Stevenson C. Assessment of breast cancer mortality trends associated with mammographic screening and adjuvant therapy from 1986 to 2013 in the state of Victoria, Australia. JAMA Netw Open. 2020;3:e208249.
- Nelson HD, Cantor A, Humphrey L, et al. A systematic review to update the 2009 US Preventive Services Task Force recommendation. Evidence syntheses No. 124. AHRQ Publication No. 14-05201-EF-1. Rockville, MD: Agency for Healthcare Research and Quality; 2016.
- Bevers TB, Helvie M, Bonaccio E, et al. Breast cancer screening and diagnosis, version 3.2018, NCCN clinical practice guidelines in oncology. J Natl Compr Canc Netw. 2018;16:1362-1389.
- ACOG Committee on Practice Bulletins–Gynecology. Breast cancer risk assessment and screening in average-risk women. Obstet Gynecol. 2017;130:e1-e16.
- Duffy SW, Vulkan D, Cuckle H, et al. Effect of mammographic screening from age 40 years on breast cancer mortality (UK Age trial): final results of a randomised, controlled trial. Lancet Oncol. 2020;21:1165-1172.
- Arleo EK, Monticciolo DL, Monsees B, et al. Persistent untreated screening-detected breast cancer: an argument against delaying screening or increasing the interval between screenings. J Am Coll Radiol. 2017;14:863-867.
- DeSantis CE, Ma J, Gaudet MM, et al. Breast cancer statistics, 2019. CA Cancer J Clin. 2019;69:438-451.
- Kaunitz AM. How effective is screening mammography for preventing breast cancer mortality? OBG Manag. 2020;32(8):17,49.
- Oeffinger KC, Fontham ET, Etzioni R, et al; American Cancer Society. Breast cancer screening for women at average risk: 2015 guideline update from the American Cancer Society. JAMA. 2015;314:1599-1614.
- US Preventive Services Task Force; Owens DK, Davidson KW, Krist AH, et al. Risk assessment, genetic counseling, and genetic testing for BRCA-related cancer: US Preventive Services Task Force recommendation statement. JAMA. 2019;322:652-665.
In the 1970s and early 1980s, population-based screening mammography was studied in numerous randomized control trials (RCTs), with the primary outcome of reduced breast cancer mortality. Although technology and the sensitivity of mammography in the 1980s was somewhat rudimentary compared with current screening, a meta-analysis of these RCTs demonstrated a clear mortality benefit for screening mammography.1 As a result, widespread population-based mammography was introduced in the mid-1980s in the United States and has become a standard for breast cancer screening.
Since that time, few RCTs of screening mammography versus observation have been conducted because of the ethical challenges of entering women into such studies as well as the difficulty and expense of long-term follow-up to measure the effect of screening on breast cancer mortality. Without ongoing RCTs of mammography, retrospective, observational, and computer simulation trials of the efficacy and harms of screening mammography have been conducted using proxy measures of mortality (such as stage at diagnosis), and some have questioned the overall benefit of screening mammography.2,3
To further complicate this controversy, some national guidelines have recommended against routinely recommending screening mammography for women aged 40 to 49 based on concerns that the harms (callbacks, benign breast biopsies, overdiagnosis) exceed the potential benefits (earlier diagnosis, possible decrease in needed treatments, reduced breast cancer mortality).4 This has resulted in a confusing morass of national recommendations with uncertainty regarding the question of whether to routinely offer screening mammography for women in their 40s at average risk for breast cancer.4-6
Recently, to address this question Duffy and colleagues conducted a large RCT of women in their 40s to evaluate the long-term effect of mammography on breast cancer mortality.7 Here, I review the study in depth and offer some guidance to clinicians and women struggling with screening decisions.
Breast cancer mortality significantly lower in the screening group
The RCT, known as the UK Age trial, was conducted in England, Wales, and Scotland and enrolled 160,921 women from 1990 through 1997.7 Women were randomly assigned in a 2:1 ratio to observation or annual screening mammogram beginning at age 39–41 until age 48. (In the United Kingdom, all women are screened starting at age 50.) Study enrollees were followed for a median of 22.8 years, and the primary outcome was breast cancer mortality.
The study results showed a 25% relative risk (RR) reduction in breast cancer mortality at 10 years of follow-up in the mammography group compared with the unscreened women (83 breast cancer deaths in the mammography group vs 219 in the observation group [RR, 0.75; 95% confidence interval (CI), 0.58–0.97; P = .029]). Based on the prevalence of breast cancer in women in their 40s, this 25% relative risk reduction translates into approximately 1 less death per 1,000 women who undergo routine screening in their 40s.
While there was no additional significant mortality reduction beyond 10 years of follow-up, as noted mammography is offered routinely starting at age 50 to all women in the United Kingdom. The authors concluded that “reducing the lower age limit for screening from 50 to 40 years [of age] could potentially reduce breast cancer mortality.”
Was overdiagnosis a concern? Another finding in this trial was related to overdiagnosis of breast cancer in the screened group. Overdiagnosis refers to mammographic-only diagnosis (that is, no clinical findings) of nonaggressive breast cancer, which would remain indolent and not harm the patient. The study results demonstrated essentially no overdiagnosis in women screened at age 40 compared with the unscreened group.
Continue to: Large trial, long follow-up are key strengths...
Large trial, long follow-up are key strengths
The UK Age trial’s primary strength is its study design: a large population-based RCT that included diverse participants with the critical study outcome for cancer screening (mortality). The study’s long-term follow-up is another key strength, since breast cancer mortality typically occurs 7 to 10 years after diagnosis. In addition, results were available for 99.9% of the women enrolled in the trial (that is, only 0.1% of women were lost to follow-up). Interestingly, the demonstrated mortality reduction with screening mammography for women in their 40s validates the mortality benefit demonstrated in other large RCTs of women in their 40s.1
Another strong point is that the study addresses the issue of whether screening women in their 40s results in overdiagnosis compared with women who start screening in their 50s. Further, this study validates a prior observational study that mammographic findings of nonprogressive cancers do not disappear, so nonaggressive cancers that present on mammography in women in their 40s still would be detected when women start screening in their 50s.8
Study limitations should be noted
The study has several limitations. For example, significant improvements have been made in breast cancer treatments that may mitigate against the positive impact of screening mammography. The impact of changed breast cancer management over the past 20 years could not be addressed with this study’s design since women would have been treated in the 1990s. In addition, substantial improvements have occurred in breast cancer screening standards (2 views vs the single view used in the study) and technology since the 1990s. Current mammography includes nearly uniform use of either digital mammography (DM) or digital breast tomosynthesis (DBT), both of which improve breast cancer detection for women in their 40s compared with the older film-screen technology. In addition, DBT reduces false-positive results by approximately 40%, resulting in fewer callbacks and biopsies. While improved cancer detection and reduced false-positive results are seen with DM and DBT, whether these technology improvements result in improved breast cancer mortality has not yet been sufficiently studied.
Perhaps the most important limitation in this study is that the women did not undergo routine risk assessment before trial entry to assure that they all were at “average risk.” As a result, both high- and average-risk women would have been included in this population-based trial. Without risk stratification, it remains uncertain whether the reduction in breast cancer mortality disproportionately exists within a high-risk subgroup (such as breast cancer gene mutation carriers).
Finally, the cost efficacy of routine screening mammography for women in their 40s was not evaluated in this study.
The UK Age trial in perspective
The good news is that there is the clear evidence that breast cancer mortality rates (deaths per 100,000) have decreased by about 40% over the past 50 years, likely due to improvements in breast cancer treatment and routine screening mammography.9 Breast cancer mortality reduction is particularly important because breast cancer remains the most common cancer and is the second leading cause of cancer death in women in the United States. In the past decade, considerable debate has arisen arguing whether this reduction in breast cancer mortality is due to improved treatments, routine screening mammography, or both. Authors of a retrospective trial in Australia, recently reviewed in OBG Management, suggested that the majority of improvement is due to improvements in treatment.3,10 However, as the authors pointed out, due to the trial’s retrospective design, causality only can be inferred. The current UK Age trial does add to the numerous prospective trials demonstrating mortality benefit for mammography in women in their 40s.11
What remains a challenge for clinicians, and for women struggling with the mammography question, is the absence of risk assessment in these long-term RCT trials as well as in the large retrospective database studies. Without risk stratification, these studies treated all the study population as “average risk.” Because breast cancer risk assessment is sporadically performed in clinical practice and there are no published RCTs of screening mammography in risk-assessed “average risk” women in their 40s, it remains uncertain whether the women benefiting from screening in their 40s are in a high-risk group or whether women of average risk in this age group also are benefiting from routine screening mammography.
Continue to: What’s next: Incorporate routine risk assessment into clinical practice...
What’s next: Incorporate routine risk assessment into clinical practice
It is not time to abandon screening mammography for all women in their 40s. Rather, routine risk assessment should be performed using one of many available validated or widely tested tools, a recommendation supported by the American College of Obstetricians and Gynecologists, the National Comprehensive Cancer Network, and the US Preventive Services Task Force.5,6,12
Ideally, these tools can be incorporated into an electronic health record and prepopulated using already available patient data (such as age, reproductive risk factors, current medications, breast density if available, and family history). Prepopulating available data into breast cancer risk calculators would allow clinicians to spend time on counseling women regarding breast cancer risk and appropriate screening methods. The TABLE provides a summary of useful breast cancer risk calculators and includes comments about their utility and significant limitations and benefits. In addition to breast cancer risk, the more comprehensive risk calculators (Tyrer-Cuzick and BOADICEA) allow calculation of ovarian cancer risk and gene mutation risk.
Routinely performing breast cancer risk assessment can guide discussions of screening mammography and can provide data for conducting a more individualized discussion on cancer genetic counseling and testing, risk reduction methods in high-risk women, and possible use of intensive breast cancer screening tools in identified high-risk women.
Ultimately, debating the question of whether all women should have routine breast cancer screening in their 40s should be passé. Ideally, all women should undergo breast cancer risk assessment in their 20s. Risk assessment results can then be used to guide the discussion of multiple potential interventions for women in their 40s (or earlier if appropriate), including routine screening mammography, cancer genetic counseling and testing in appropriate individuals, and intervention for women who are identified at high risk.
Absent breast cancer risk assessment, screening mammography still should be offered to women in their 40s, and the decision to proceed should be based on a discussion of risks, benefits, and the value the patient places on these factors.●
In the 1970s and early 1980s, population-based screening mammography was studied in numerous randomized control trials (RCTs), with the primary outcome of reduced breast cancer mortality. Although technology and the sensitivity of mammography in the 1980s was somewhat rudimentary compared with current screening, a meta-analysis of these RCTs demonstrated a clear mortality benefit for screening mammography.1 As a result, widespread population-based mammography was introduced in the mid-1980s in the United States and has become a standard for breast cancer screening.
Since that time, few RCTs of screening mammography versus observation have been conducted because of the ethical challenges of entering women into such studies as well as the difficulty and expense of long-term follow-up to measure the effect of screening on breast cancer mortality. Without ongoing RCTs of mammography, retrospective, observational, and computer simulation trials of the efficacy and harms of screening mammography have been conducted using proxy measures of mortality (such as stage at diagnosis), and some have questioned the overall benefit of screening mammography.2,3
To further complicate this controversy, some national guidelines have recommended against routinely recommending screening mammography for women aged 40 to 49 based on concerns that the harms (callbacks, benign breast biopsies, overdiagnosis) exceed the potential benefits (earlier diagnosis, possible decrease in needed treatments, reduced breast cancer mortality).4 This has resulted in a confusing morass of national recommendations with uncertainty regarding the question of whether to routinely offer screening mammography for women in their 40s at average risk for breast cancer.4-6
Recently, to address this question Duffy and colleagues conducted a large RCT of women in their 40s to evaluate the long-term effect of mammography on breast cancer mortality.7 Here, I review the study in depth and offer some guidance to clinicians and women struggling with screening decisions.
Breast cancer mortality significantly lower in the screening group
The RCT, known as the UK Age trial, was conducted in England, Wales, and Scotland and enrolled 160,921 women from 1990 through 1997.7 Women were randomly assigned in a 2:1 ratio to observation or annual screening mammogram beginning at age 39–41 until age 48. (In the United Kingdom, all women are screened starting at age 50.) Study enrollees were followed for a median of 22.8 years, and the primary outcome was breast cancer mortality.
The study results showed a 25% relative risk (RR) reduction in breast cancer mortality at 10 years of follow-up in the mammography group compared with the unscreened women (83 breast cancer deaths in the mammography group vs 219 in the observation group [RR, 0.75; 95% confidence interval (CI), 0.58–0.97; P = .029]). Based on the prevalence of breast cancer in women in their 40s, this 25% relative risk reduction translates into approximately 1 less death per 1,000 women who undergo routine screening in their 40s.
While there was no additional significant mortality reduction beyond 10 years of follow-up, as noted mammography is offered routinely starting at age 50 to all women in the United Kingdom. The authors concluded that “reducing the lower age limit for screening from 50 to 40 years [of age] could potentially reduce breast cancer mortality.”
Was overdiagnosis a concern? Another finding in this trial was related to overdiagnosis of breast cancer in the screened group. Overdiagnosis refers to mammographic-only diagnosis (that is, no clinical findings) of nonaggressive breast cancer, which would remain indolent and not harm the patient. The study results demonstrated essentially no overdiagnosis in women screened at age 40 compared with the unscreened group.
Continue to: Large trial, long follow-up are key strengths...
Large trial, long follow-up are key strengths
The UK Age trial’s primary strength is its study design: a large population-based RCT that included diverse participants with the critical study outcome for cancer screening (mortality). The study’s long-term follow-up is another key strength, since breast cancer mortality typically occurs 7 to 10 years after diagnosis. In addition, results were available for 99.9% of the women enrolled in the trial (that is, only 0.1% of women were lost to follow-up). Interestingly, the demonstrated mortality reduction with screening mammography for women in their 40s validates the mortality benefit demonstrated in other large RCTs of women in their 40s.1
Another strong point is that the study addresses the issue of whether screening women in their 40s results in overdiagnosis compared with women who start screening in their 50s. Further, this study validates a prior observational study that mammographic findings of nonprogressive cancers do not disappear, so nonaggressive cancers that present on mammography in women in their 40s still would be detected when women start screening in their 50s.8
Study limitations should be noted
The study has several limitations. For example, significant improvements have been made in breast cancer treatments that may mitigate against the positive impact of screening mammography. The impact of changed breast cancer management over the past 20 years could not be addressed with this study’s design since women would have been treated in the 1990s. In addition, substantial improvements have occurred in breast cancer screening standards (2 views vs the single view used in the study) and technology since the 1990s. Current mammography includes nearly uniform use of either digital mammography (DM) or digital breast tomosynthesis (DBT), both of which improve breast cancer detection for women in their 40s compared with the older film-screen technology. In addition, DBT reduces false-positive results by approximately 40%, resulting in fewer callbacks and biopsies. While improved cancer detection and reduced false-positive results are seen with DM and DBT, whether these technology improvements result in improved breast cancer mortality has not yet been sufficiently studied.
Perhaps the most important limitation in this study is that the women did not undergo routine risk assessment before trial entry to assure that they all were at “average risk.” As a result, both high- and average-risk women would have been included in this population-based trial. Without risk stratification, it remains uncertain whether the reduction in breast cancer mortality disproportionately exists within a high-risk subgroup (such as breast cancer gene mutation carriers).
Finally, the cost efficacy of routine screening mammography for women in their 40s was not evaluated in this study.
The UK Age trial in perspective
The good news is that there is the clear evidence that breast cancer mortality rates (deaths per 100,000) have decreased by about 40% over the past 50 years, likely due to improvements in breast cancer treatment and routine screening mammography.9 Breast cancer mortality reduction is particularly important because breast cancer remains the most common cancer and is the second leading cause of cancer death in women in the United States. In the past decade, considerable debate has arisen arguing whether this reduction in breast cancer mortality is due to improved treatments, routine screening mammography, or both. Authors of a retrospective trial in Australia, recently reviewed in OBG Management, suggested that the majority of improvement is due to improvements in treatment.3,10 However, as the authors pointed out, due to the trial’s retrospective design, causality only can be inferred. The current UK Age trial does add to the numerous prospective trials demonstrating mortality benefit for mammography in women in their 40s.11
What remains a challenge for clinicians, and for women struggling with the mammography question, is the absence of risk assessment both in these long-term RCTs and in the large retrospective database studies. Without risk stratification, these studies treated the entire study population as “average risk.” Because breast cancer risk assessment is performed only sporadically in clinical practice, and because no RCTs of screening mammography in risk-assessed “average risk” women in their 40s have been published, it remains uncertain whether the women benefiting from screening in their 40s belong to a high-risk group or whether average-risk women in this age group also benefit from routine screening mammography.
What’s next: Incorporate routine risk assessment into clinical practice
It is not time to abandon screening mammography for all women in their 40s. Rather, routine risk assessment should be performed using one of many available validated or widely tested tools, a recommendation supported by the American College of Obstetricians and Gynecologists, the National Comprehensive Cancer Network, and the US Preventive Services Task Force.5,6,12
Ideally, these tools can be incorporated into the electronic health record and prepopulated with already available patient data (such as age, reproductive risk factors, current medications, breast density if available, and family history). Prepopulating breast cancer risk calculators in this way would free clinicians to spend their time counseling women about breast cancer risk and appropriate screening methods. The TABLE summarizes useful breast cancer risk calculators, with comments on their utility, benefits, and significant limitations. In addition to breast cancer risk, the more comprehensive calculators (Tyrer-Cuzick and BOADICEA) also estimate ovarian cancer risk and gene mutation risk.
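As a purely hypothetical illustration of what EHR prepopulation could look like, the sketch below collects commonly available record fields and applies a toy triage rule. The `RiskInput` fields and the `flag_for_formal_risk_calculation` helper are invented for this example; they do not implement the Gail, Tyrer-Cuzick, or BOADICEA models, which require their published coefficients.

```python
# Hypothetical sketch only: fields and triage rule are illustrative
# stand-ins, not an actual validated breast cancer risk model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskInput:
    """Patient data an EHR could prepopulate before a counseling visit."""
    age: int
    age_at_menarche: int
    age_at_first_birth: Optional[int]      # None if nulliparous
    breast_density_birads: Optional[str]   # "A"-"D"; often unavailable
    first_degree_relatives_with_bc: int = 0

def flag_for_formal_risk_calculation(p: RiskInput) -> bool:
    """Toy triage rule: flag records whose prepopulated fields suggest
    running a comprehensive risk model and scheduling counseling."""
    return (p.first_degree_relatives_with_bc >= 1
            or p.breast_density_birads in ("C", "D"))

patient = RiskInput(age=44, age_at_menarche=12,
                    age_at_first_birth=None, breast_density_birads="D")
print(flag_for_formal_risk_calculation(patient))  # True
```

The point of the sketch is the workflow, not the rule: data entry happens once, automatically, so the clinic visit can be spent on counseling rather than form-filling.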
Routinely performing breast cancer risk assessment can guide discussions of screening mammography and can provide data for conducting a more individualized discussion on cancer genetic counseling and testing, risk reduction methods in high-risk women, and possible use of intensive breast cancer screening tools in identified high-risk women.
Ultimately, debating the question of whether all women should have routine breast cancer screening in their 40s should be passé. Ideally, all women should undergo breast cancer risk assessment in their 20s. Risk assessment results can then be used to guide the discussion of multiple potential interventions for women in their 40s (or earlier if appropriate), including routine screening mammography, cancer genetic counseling and testing in appropriate individuals, and intervention for women who are identified at high risk.
Absent breast cancer risk assessment, screening mammography still should be offered to women in their 40s, and the decision to proceed should be based on a discussion of risks, benefits, and the value the patient places on these factors.●
1. Nelson HD, Fu R, Cantor A, et al. Effectiveness of breast cancer screening: systematic review and meta-analysis to update the 2009 US Preventive Services Task Force recommendation. Ann Intern Med. 2016;164:244-255.
2. Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med. 2012;367:1998-2005.
3. Burton R, Stevenson C. Assessment of breast cancer mortality trends associated with mammographic screening and adjuvant therapy from 1986 to 2013 in the state of Victoria, Australia. JAMA Netw Open. 2020;3:e208249.
4. Nelson HD, Cantor A, Humphrey L, et al. A systematic review to update the 2009 US Preventive Services Task Force recommendation. Evidence Syntheses No. 124. AHRQ Publication No. 14-05201-EF-1. Rockville, MD: Agency for Healthcare Research and Quality; 2016.
5. Bevers TB, Helvie M, Bonaccio E, et al. Breast cancer screening and diagnosis, version 3.2018, NCCN clinical practice guidelines in oncology. J Natl Compr Canc Netw. 2018;16:1362-1389.
6. ACOG Committee on Practice Bulletins–Gynecology. Breast cancer risk assessment and screening in average-risk women. Obstet Gynecol. 2017;130:e1-e16.
7. Duffy SW, Vulkan D, Cuckle H, et al. Effect of mammographic screening from age 40 years on breast cancer mortality (UK Age trial): final results of a randomised, controlled trial. Lancet Oncol. 2020;21:1165-1172.
8. Arleo EK, Monticciolo DL, Monsees B, et al. Persistent untreated screening-detected breast cancer: an argument against delaying screening or increasing the interval between screenings. J Am Coll Radiol. 2017;14:863-867.
9. DeSantis CE, Ma J, Gaudet MM, et al. Breast cancer statistics, 2019. CA Cancer J Clin. 2019;69:438-451.
10. Kaunitz AM. How effective is screening mammography for preventing breast cancer mortality? OBG Manag. 2020;32(8):17,49.
11. Oeffinger KC, Fontham ET, Etzioni R, et al; American Cancer Society. Breast cancer screening for women at average risk: 2015 guideline update from the American Cancer Society. JAMA. 2015;314:1599-1614.
12. US Preventive Services Task Force; Owens DK, Davidson KW, Krist AH, et al. Risk assessment, genetic counseling, and genetic testing for BRCA-related cancer: US Preventive Services Task Force recommendation statement. JAMA. 2019;322:652-665.
Pathologic CR in HER2+ breast cancer predicts long-term survival
In fact, for the majority of women, pCR appears to be a marker of cure.
The trial enrolled 455 women with HER2-positive breast cancer tumors measuring at least 2 cm. They were randomized to neoadjuvant trastuzumab, lapatinib, or the two drugs in combination, each given with paclitaxel, followed by additional chemotherapy and continuation of the same targeted therapy after surgery.
Relative to trastuzumab alone, trastuzumab plus lapatinib improved rates of pCR, as shown by data published in The Lancet in 2012. However, the dual therapy did not significantly prolong event-free or overall survival, according to data published in The Lancet Oncology in 2014. Findings were similar in an update at a median follow-up of 6.7 years, published in the European Journal of Cancer in 2019.
Study investigator Paolo Nuciforo, MD, PhD, of the Vall d’Hebron Institute of Oncology in Barcelona, reported the trial’s final results, now at a median follow-up of 9.7 years, at the 12th European Breast Cancer Conference.
There were no significant differences in 9-year outcomes by specific HER2-targeted therapy. However, in a landmark analysis among women who were event free and still on follow-up 30 weeks after randomization, those achieving pCR with any of the therapies were 52% less likely to experience events and 63% less likely to die. Benefit was greatest in the subset of patients with hormone receptor–negative disease.
“The long-term follow-up confirms that, independent of the treatment regimen that we use – in this case, the dual blockade was with lapatinib, but similar results can be expected with other dual blockade – the pCR is a very robust surrogate biomarker of long-term survival,” Dr. Nuciforo commented in a press conference, noting that dual trastuzumab and pertuzumab has emerged as the standard of care.
“If we really pay attention to the curve, it’s maybe interesting to see that, after year 6, we actually don’t see any events in the pCR population. So this means that these patients are almost cured. We cannot say the word ‘cure’ in cancer, but it’s very reassuring to see the long-term survival analysis support the use of pCR as an endpoint,” he elaborated.
“Our results support the design of future trial concepts in HER2-positive early breast cancer which use pCR as an early efficacy readout of long-term benefit to escalate or deescalate therapy, particularly for hormone receptor–negative tumors,” Dr. Nuciforo concluded.
Support for current practice
“The study lends support for the current practice of risk-stratifying by pCR as well as making treatment decisions regarding T-DM1 [trastuzumab emtansine], and there hasn’t been a big change between 5-year and 9-year outcomes,” Lisa A. Carey, MD, of the University of North Carolina at Chapel Hill Lineberger Comprehensive Cancer Center, commented in an interview.
The lack of late events in the group with pCR technically meets the definition of cure, Dr. Carey said. “I think it speaks to the relatively early relapse risk in HER2-positive breast cancer and the impact of anti-HER2 therapy that carries forward. In general, these are findings similar to long-term findings of other trials and I suspect will be the same for any regimen.”
Although the analysis of dual lapatinib-trastuzumab therapy was underpowered, the trends seen align with favorable results in the adjuvant APHINITY trial (which combined trastuzumab with pertuzumab) and the neoadjuvant CALGB 40601 trial (which combined trastuzumab with lapatinib), according to Dr. Carey. “There has been a trend in every other study [of dual therapy] performed, so this is consistent.”
Study details
NeoALTTO is noteworthy for having the longest follow-up among all neoadjuvant studies of dual HER2 blockade in early breast cancer, Dr. Nuciforo said.
He reported no significant difference in survival between the treatment arms at 9 years.
The 9-year rate of event-free survival was 69% with lapatinib-trastuzumab, 63% with lapatinib alone, and 65% with trastuzumab alone; the corresponding 9-year rates of overall survival were 80%, 77%, and 76%.
However, event-free and overall survival differed significantly between women who achieved pCR and those who did not.
“pCR was achieved for almost twice as many patients treated with dual HER2 blockade, compared with patients in the single-agent arms,” Dr. Nuciforo pointed out. The pCR rate was 51.3% with lapatinib-trastuzumab, 24.7% with lapatinib alone, and 29.5% with trastuzumab alone.
Relative to peers who did not achieve pCR, women who did had better 9-year event-free survival (77% vs. 61%; adjusted hazard ratio, 0.48; P = .0008). The benefit was stronger in hormone receptor–negative disease (HR, 0.43; P = .002) than in hormone receptor–positive disease (HR, 0.60; P = .15).
The pattern was similar for overall survival at 9 years – 88% in those who achieved a pCR and 72% in those who did not (adjusted HR, 0.37; P = .0004). Again, greater benefit was seen in hormone receptor–negative disease (HR, 0.33; P = .002) than in hormone receptor–positive disease (HR, 0.44; P = .09).
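The percentage reductions quoted in this report map directly onto the reported hazard ratios; as a quick arithmetic check (assuming the conventional reading of a hazard ratio as a relative event rate):

```python
def hr_to_percent_reduction(hr: float) -> int:
    """Percent reduction in event rate implied by a hazard ratio,
    using the conventional reading: reduction = (1 - HR) * 100."""
    return round((1 - hr) * 100)

# Event-free survival, adjusted HR 0.48 -> 52% fewer events
print(hr_to_percent_reduction(0.48))  # 52
# Overall survival, adjusted HR 0.37 -> 63% lower risk of death
print(hr_to_percent_reduction(0.37))  # 63
```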
“Biomarker-driven approaches may improve selection of those patients who are more likely to respond to anti-HER2 therapies,” Dr. Nuciforo proposed.
From year 6 onward, no additional fatal adverse events, nonfatal serious adverse events, or primary cardiac endpoints were recorded.
The study was funded by Novartis. Dr. Nuciforo and Dr. Carey disclosed no conflicts of interest.
SOURCE: Nuciforo P et al. EBCC-12 Virtual Conference, Abstract 23.
FROM EBCC-12 VIRTUAL CONFERENCE
Combined features of benign breast disease tied to breast cancer risk
“Benign breast disease is a key risk factor for breast cancer risk prediction,” commented presenting investigator Marta Román, PhD, of the Hospital del Mar Medical Research Institute in Barcelona. “Those women who have had a benign breast disease diagnosis have an increased risk that lasts for at least 20 years.”
To assess the combined influence of various attributes of benign breast disease, the investigators studied 629,087 women, aged 50-69 years, in Spain who underwent population-based mammographic breast cancer screening during 1994-2015 and did not have breast cancer at their prevalent (first) screen. The mean follow-up was 7.8 years.
Results showed that breast cancer risk was about three times higher for women with benign breast disease that was proliferative or that was detected on an incident screen, relative to peers with no benign breast disease. When combinations of factors were considered, breast cancer risk was most elevated – more than four times higher – for women with proliferative benign breast disease with atypia detected on an incident screen.
“We believe that these findings should be considered when discussing risk-based personalized screening strategies because these differences between prevalent and incident screens might be important if we want to personalize the screening, whether it’s the first time a woman comes to the screening program or a subsequent screen,” Dr. Román said.
Practice changing?
The study’s large size and population-based design, likely permitting capture of most biopsy results, are strengths, Mark David Pearlman, MD, of the University of Michigan, Ann Arbor, commented in an interview.
But its observational, retrospective nature opens the study up to biases, such as uncertainty as to how many women were symptomatic at the time of their mammogram and the likelihood of heightened monitoring after a biopsy showing hyperplasia, Dr. Pearlman cautioned.
“Moreover, the relative risk in this study for proliferative benign breast disease without atypia is substantially higher than prior observations of this group. This discrepancy was not discussed by the authors,” Dr. Pearlman said.
At present, women’s risk of breast cancer is predicted using well-validated models that include the question of prior breast biopsies, such as the Gail Model, the Tyrer-Cuzick model (IBIS tool), and the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm, Dr. Pearlman noted.
“This study, without further validation within a model, would not change risk assessment,” he said, disagreeing with the investigators’ conclusions. “What I would say is that further study to determine how to use this observation to decide if any change in screening or management should occur would be more appropriate.”
Study details
The 629,087 women studied underwent 2,327,384 screens, Dr. Román reported. In total, screening detected 9,184 cases of benign breast disease and 9,431 breast cancers.
Breast cancer was diagnosed in 2.4% and 3.0% of women with benign breast disease detected on prevalent and incident screens, respectively, compared with 1.5% of women without any benign breast disease detected.
Elevation of breast cancer risk varied across benign breast disease subtype. Relative to peers without any benign disease, risk was significantly elevated for women with nonproliferative disease (adjusted hazard ratio, 1.95), proliferative disease without atypia (aHR, 3.19), and proliferative disease with atypia (aHR, 3.82).
Similarly, elevation of risk varied with the screen at which the benign disease was detected. Risk was significantly elevated when the disease was found at a prevalent screen (aHR, 1.87) and more so when it was found at an incident screen (aHR, 2.67).
There was no significant interaction of these two factors (P = .83). However, when combinations were considered, risk was highest for women with proliferative benign breast disease with atypia detected on incident screens (aHR, 4.35) or prevalent screens (aHR, 3.35), and women with proliferative benign breast disease without atypia detected on incident screens (aHR, 3.83).
This study was supported by grants from Instituto de Salud Carlos III FEDER and by the Research Network on Health Services in Chronic Diseases. Dr. Román and Dr. Pearlman disclosed no conflicts of interest.
SOURCE: Román M et al. EBCC-12 Virtual Conference, Abstract 15.
“We believe that these findings should be considered when discussing risk-based personalized screening strategies because these differences between prevalent and incident screens might be important if we want to personalize the screening, whether it’s the first time a woman comes to the screening program or a subsequent screen,” Dr. Román said.
Practice changing?
The study’s large size and population-based design, likely permitting capture of most biopsy results, are strengths, Mark David Pearlman, MD, of the University of Michigan, Ann Arbor, commented in an interview.
But its observational, retrospective nature opens the study up to biases, such as uncertainty as to how many women were symptomatic at the time of their mammogram and the likelihood of heightened monitoring after a biopsy showing hyperplasia, Dr. Pearlman cautioned.
“Moreover, the relative risk in this study for proliferative benign breast disease without atypia is substantially higher than prior observations of this group. This discrepancy was not discussed by the authors,” Dr. Pearlman said.
At present, women’s risk of breast cancer is predicted using well-validated models that incorporate prior breast biopsy history, such as the Gail Model, the Tyrer-Cuzick model (IBIS tool), and the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm, Dr. Pearlman noted.
“This study, without further validation within a model, would not change risk assessment,” he said, disagreeing with the investigators’ conclusions. “What I would say is that further study to determine how to use this observation to decide if any change in screening or management should occur would be more appropriate.”
Study details
The 629,087 women studied underwent 2,327,384 screens, Dr. Román reported. In total, screening detected 9,184 cases of benign breast disease and 9,431 breast cancers.
Breast cancer was diagnosed in 2.4% and 3.0% of women with benign breast disease detected on prevalent and incident screens, respectively, compared with 1.5% of women without any benign breast disease detected.
Elevation of breast cancer risk varied across benign breast disease subtype. Relative to peers without any benign disease, risk was significantly elevated for women with nonproliferative disease (adjusted hazard ratio, 1.95), proliferative disease without atypia (aHR, 3.19), and proliferative disease with atypia (aHR, 3.82).
Similarly, elevation of risk varied depending on the screening at which the benign disease was detected. Risk was significantly elevated when the disease was found at prevalent screens (aHR, 1.87) and more so when it was found at incident screens (aHR, 2.67).
There was no significant interaction of these two factors (P = .83). However, when combinations were considered, risk was highest for women with proliferative benign breast disease with atypia detected on incident screens (aHR, 4.35) or prevalent screens (aHR, 3.35), and women with proliferative benign breast disease without atypia detected on incident screens (aHR, 3.83).
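The adjusted hazard ratios above can be read as multiplicative increases in the instantaneous rate of a breast cancer diagnosis relative to women with no benign breast disease. A minimal sketch of that translation, using only the figures reported in the abstract:

```python
# Reading the reported adjusted hazard ratios (aHR) as percentage
# increases in hazard relative to women with no benign breast disease.
# The values are taken directly from the study; nothing else is assumed.
ahrs = {
    "nonproliferative disease": 1.95,
    "proliferative disease without atypia": 3.19,
    "proliferative disease with atypia": 3.82,
    "detected at prevalent screen": 1.87,
    "detected at incident screen": 2.67,
}
for group, ahr in ahrs.items():
    print(f"{group}: {(ahr - 1) * 100:.0f}% higher hazard")
```

So, for example, an aHR of 1.95 corresponds to a roughly 95% higher hazard, while 3.82 corresponds to a nearly fourfold rate.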
This study was supported by grants from Instituto de Salud Carlos III FEDER and by the Research Network on Health Services in Chronic Diseases. Dr. Román and Dr. Pearlman disclosed no conflicts of interest.
SOURCE: Román M et al. EBCC-12 Virtual Conference, Abstract 15.
FROM EBCC-12 VIRTUAL CONFERENCE
HIT-6 may help track meaningful change in chronic migraine
The 6-item Headache Impact Test (HIT-6) may help clinicians track meaningful change in chronic migraine, recent research suggests.
Using data from the phase 3 PROMISE-2 study, which evaluated intravenous eptinezumab at doses of 100 mg or 300 mg, or placebo, every 12 weeks in 1,072 participants for the prevention of chronic migraine, Carrie R. Houts, PhD, director of psychometrics at the Vector Psychometric Group in Chapel Hill, N.C., and colleagues determined that their finding of a 6-point improvement in HIT-6 total score was consistent with other studies. However, they pointed out that little research has evaluated item-specific HIT-6 scores in individuals with chronic migraine. The HIT-6 items ask whether individuals with headaches experience severe pain, limit their daily activities, have a desire to lie down, feel too tired to do daily activities, feel “fed up or irritated” because of headaches, and feel their headaches limit concentration on work or daily activities.
“The item-specific responder definitions give clinicians and researchers the ability to evaluate and track the impact of headache on specific item-level areas of patients’ lives. These responder definitions provide practical and easily interpreted results that can be used to evaluate treatment benefits over time and to improve clinician-patients communication focus on improvements in key aspects of functioning in individuals with chronic migraine,” Dr. Houts and colleagues wrote in their study, published in the October issue of Headache.
The 6-point value and the 1-2 category improvement values in item-specific scores, they suggested, could be used as a benchmark to help other clinicians and researchers detect meaningful change in individual patients with chronic migraine. Although the user guide for HIT-6 highlights a 5-point change in the total score as clinically meaningful, the authors of the guide do not provide evidence for why the 5-point value signifies clinically meaningful change, they said.
Determining thresholds of clinically meaningful change
In their study, Dr. Houts and colleagues used distribution-based methods to gauge responder values for the HIT-6 total score, while item-specific HIT-6 analyses were measured with Patients’ Global Impression of Change (PGIC), reduction in migraine frequency through monthly migraine days (MMDs), and EuroQol 5 dimensions 5 levels visual analog scale (EQ-5D-5L VAS). The researchers also used HIT-6 values from a literature review and from analyses in PROMISE-2 to calculate “a final chronic migraine-specific responder definition value” between baseline and 12 weeks. Participants in the PROMISE-2 study were mostly women (88.2%) and white (91.0%) with a mean age of 40.5 years.
The literature search revealed responder thresholds for the HIT-6 total score ranging from a decrease of 3 points to a decrease of 8 points. Within PROMISE-2, the HIT-6 total score responder threshold was found to be between –2.6 and –2.2, which the researchers rounded down to a decrease of 3 points. Taking both sets of responder thresholds into account, the researchers calculated the median responder value as –5.5, which was rounded down to a decrease of 6 points in the HIT-6 total score. “[The estimate] appears most appropriate for discriminating between individuals with chronic migraine who have experienced meaningful change over time and those who have not,” Dr. Houts and colleagues said.
For item-specific HIT-6 scores, the mean score changes were –1 point for the categories involving severe pain, limiting activities, and lying down, and –2 points for the categories involving feeling tired, being fed up or irritated, and limiting concentration.
“Taken together, the current chronic migraine-specific results are consistent with values derived from general headache/migraine samples and suggest that a decrease of 6 points or more on the HIT-6 total score would be considered meaningful to chronic migraine patients,” Dr. Houts and colleagues said. “This would translate to approximately a 4-category change on a single item, change on 2 items of approximately 2 and 3 categories, or a 1-category change on 3 or 4 of the 6 items, depending on the initial category.”
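The translation between total-score change and item-category change follows from the published HIT-6 category weights (never = 6, rarely = 8, sometimes = 10, very often = 11, always = 13; total score range 36–78). A minimal sketch, assuming those standard weights:

```python
# HIT-6 scores each of its 6 items by response category; the total is the
# sum across items (range 36-78). Weights are the published HIT-6
# category scores: never=6, rarely=8, sometimes=10, very often=11, always=13.
WEIGHTS = [6, 8, 10, 11, 13]

def hit6_total(responses):
    """responses: six category indices, 0 (never) through 4 (always)."""
    assert len(responses) == 6
    return sum(WEIGHTS[r] for r in responses)

baseline = hit6_total([4] * 6)             # all "always" -> 78
followup = hit6_total([2, 2, 4, 4, 4, 4])  # 2 items improve by 2 categories
change = followup - baseline
print(change)  # -6: meets the proposed responder threshold
```

Because the category steps are unequal (a one-category move is worth 1–3 points depending on where it starts), the same 6-point decrease can arise from several different response patterns, which is why the authors describe the mapping only approximately.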
The researchers cautioned that the values outlined in the study “should not be used to determine clinically meaningful difference between treatment groups” and that “future work, similar to that reported here, will identify a chronic migraine-specific clinically meaningful difference between treatment groups value.”
A better measure of chronic migraine?
In an interview, J. D. Bartleson Jr., MD, a retired neurologist with the Mayo Clinic in Rochester, Minn., questioned why the HIT-6 criteria were used in the initial PROMISE-2 study. “There is not a lot of difference between the significant and insignificant categories. Chronic migraine may be better measured with pain severity and number of headache days per month,” he said.
In terms of the study’s clinical application for neurologists, “It may be appropriate to use just 1 or 2 symptoms for evaluating a given patient’s headache burden,” Dr. Bartleson said. He emphasized that more research is needed.
This study was funded by H. Lundbeck A/S, which also provided funding of medical writing and editorial support for the manuscript. Three authors report being employees of Vector Psychometric Group at the time of the study, and the company received funding from H. Lundbeck A/S for their time conducting study-related research. Three other authors report relationships with pharmaceutical companies, medical societies, government agencies, and industry related to the study in the form of consultancies, advisory board memberships, honoraria, research support, stock or stock options, and employment. Dr. Bartleson reports no relevant conflicts of interest.
FROM HEADACHE
Gene signature found similarly prognostic in ILC and IDC
ILC is enriched with features indicating low proliferative activity, noted investigator Otto Metzger, MD, of the Dana Farber Cancer Institute in Boston.
“Data from retrospective series have indicated no benefit with adjuvant chemotherapy for patients diagnosed with early-stage ILC,” he said. “It’s fair to say that chemotherapy decisions for patients with ILC remain controversial.”
With this in mind, Dr. Metzger and colleagues analyzed data for 5,313 women who underwent surgery for early-stage breast cancer (node-negative or up to three positive lymph nodes) and were risk-stratified to receive or skip adjuvant chemotherapy based on both clinical risk and the MammaPrint score for genomic risk. Fully 44% of women with ILC had discordant clinical and genomic risks.
With a median follow-up of 8.7 years, the 5-year rate of distant metastasis–free survival among all patients classified as genomic high risk was 92.1% in women with IDC and 88.1% in women with ILC, with overlapping 95% confidence intervals. Rates of distant metastasis–free survival for patients with genomic low risk were 96.4% in women with IDC and 96.6% in women with ILC, again with confidence intervals that overlapped.
The pattern was essentially the same for overall survival, and results carried over into 8-year outcomes as well.
“We believe that MammaPrint is a clinically useful test for patients diagnosed with ILC,” Dr. Metzger said. “There are similar survival outcomes for ILC and IDC when matched by genomic risk. This is an important message.”
It should be standard to omit chemotherapy for patients who have ILC classified as high clinical risk but low genomic risk by MammaPrint, Dr. Metzger recommended. “By contrast, MammaPrint should facilitate chemotherapy treatment decisions for patients diagnosed with ILC and high-risk MammaPrint,” he said.
Prognostic, but predictive?
“This is a well-designed prospective multicenter trial and provides the best evidence to date that MammaPrint is an important prognostic tool for ILC,” Todd Tuttle, MD, of the University of Minnesota in Minneapolis, said in an interview.
Dr. Tuttle said he mainly uses the MammaPrint test and the OncoType 21-gene recurrence score to estimate prognosis for his patients with ILC.
These new data establish that MammaPrint is prognostic in ILC, but the value of MammaPrint’s genomic high risk result for making the decision about chemotherapy is still unclear, according to Dr. Tuttle.
“I don’t think we know whether MammaPrint can predict the benefit of chemotherapy for patients with stage I or II ILC,” he elaborated. “We need further high-quality studies such as this one to determine the best treatment strategies for ILC, which is a difficult breast cancer.”
Study details
Of the 5,313 patients studied, 487 had ILC (255 classic and 232 variant) and 4,826 had IDC according to central pathology assessment, Dr. Metzger reported.
MammaPrint classified 39% of the IDC group and 16% of the ILC group (10% of those with classic disease and 23% of those with variant disease) as genomically high risk for recurrence. The Adjuvant! Online tool classified 48.3% of ILC and 51.5% of IDC patients as clinically high risk.
Among the 44% of women with ILC having discordant genomic and clinical risk, discordance was usually due to the combination of low genomic risk and high clinical risk, seen in 38%.
The curves for 5-year distant metastasis–free survival stratified by genomic risk essentially overlapped for the IDC and ILC groups. Furthermore, there was no significant interaction of histologic type and genomic risk on this outcome (P = .547).
The 5-year rate of overall survival among women with genomic high risk was 95.6% in the IDC group and 93.5% in the ILC group. Among women with genomic low risk, 5-year overall survival was 98.1% in the IDC group and 97.7% in the ILC group, again with overlapping confidence intervals within each risk category.
The study was funded with support from the Breast Cancer Research Foundation. Dr. Metzger disclosed consulting fees from AbbVie, Genentech, Roche, and Pfizer. Dr. Tuttle disclosed no conflicts of interest.
SOURCE: Metzger O et al. EBCC-12 Virtual Conference. Abstract 6.
ILC is enriched with features indicating low proliferative activity, noted investigator Otto Metzger, MD, of the Dana Farber Cancer Institute in Boston.
“Data from retrospective series have indicated no benefit with adjuvant chemotherapy for patients diagnosed with early-stage ILC,” he said. “It’s fair to say that chemotherapy decisions for patients with ILC remain controversial.”
With this in mind, Dr. Metzger and colleagues analyzed data for 5,313 women who underwent surgery for early-stage breast cancer (node-negative or up to three positive lymph nodes) and were risk-stratified to receive or skip adjuvant chemotherapy based on both clinical risk and the MammaPrint score for genomic risk. Fully 44% of women with ILC had discordant clinical and genomic risks.
With a median follow-up of 8.7 years, the 5-year rate of distant metastasis–free survival among all patients classified as genomic high risk was 92.1% in women with IDC and 88.1% in women with ILC, with overlapping 95% confidence intervals. Rates of distant metastasis–free survival for patients with genomic low risk were 96.4% in women with IDC and 96.6% in women with ILC, again with confidence intervals that overlapped.
The pattern was essentially the same for overall survival, and results carried over into 8-year outcomes as well.
“We believe that MammaPrint is a clinically useful test for patients diagnosed with ILC,” Dr. Metzger said. “There are similar survival outcomes for ILC and IDC when matched by genomic risk. This is an important message.”
It should be standard to omit chemotherapy for patients who have ILC classified as high clinical risk but low genomic risk by MammaPrint, Dr. Metzger recommended. “By contrast, MammaPrint should facilitate chemotherapy treatment decisions for patients diagnosed with ILC and high-risk MammaPrint,” he said.
Prognostic, but predictive?
“This is a well-designed prospective multicenter trial and provides the best evidence to date that MammaPrint is an important prognostic tool for ILC,” Todd Tuttle, MD, of University of Minnesota in Minneapolis, said in an interview.
Dr. Tuttle said he mainly uses the MammaPrint test and the OncoType 21-gene recurrence score to estimate prognosis for his patients with ILC.
These new data establish that MammaPrint is prognostic in ILC, but the value of MammaPrint’s genomic high risk result for making the decision about chemotherapy is still unclear, according to Dr. Tuttle.
“I don’t think we know whether MammaPrint can predict the benefit of chemotherapy for patients with stage I or II ILC,” he elaborated. “We need further high-quality studies such as this one to determine the best treatment strategies for ILC, which is a difficult breast cancer.”
Study details
Of the 5,313 patients studied, 487 had ILC (255 classic and 232 variant) and 4,826 had IDC according to central pathology assessment, Dr. Metzger reported.
MammaPrint classified 39% of the IDC group and 16% of the ILC group (10% of those with classic disease and 23% of those with variant disease) as genomically high risk for recurrence. The Adjuvant! Online tool classified 48.3% of ILC and 51.5% of IDC patients as clinically high risk.
Among the 44% of women with ILC having discordant genomic and clinical risk, discordance was usually due to the combination of low genomic risk and high clinical risk, seen in 38%.
The curves for 5-year distant metastasis–free survival stratified by genomic risk essentially overlapped for the IDC and ILC groups. Furthermore, there was no significant interaction of histologic type and genomic risk on this outcome (P = .547).
The 5-year rate of overall survival among women with genomic high risk was 95.6% in the IDC group and 93.5% in the ILC group. Among women with genomic low risk, 5-year overall survival was 98.1% in the IDC group and 97.7% in the ILC group, again with overlapping confidence intervals within each risk category.
The study was funded with support from the Breast Cancer Research Foundation. Dr. Metzger disclosed consulting fees from AbbVie, Genentech, Roche, and Pfizer. Dr. Tuttle disclosed no conflicts of interest.
SOURCE: Metzger O et al. EBCC-12 Virtual Conference. Abstract 6.
ILC is enriched with features indicating low proliferative activity, noted investigator Otto Metzger, MD, of the Dana Farber Cancer Institute in Boston.
“Data from retrospective series have indicated no benefit with adjuvant chemotherapy for patients diagnosed with early-stage ILC,” he said. “It’s fair to say that chemotherapy decisions for patients with ILC remain controversial.”
With this in mind, Dr. Metzger and colleagues analyzed data for 5,313 women who underwent surgery for early-stage breast cancer (node-negative or up to three positive lymph nodes) and were risk-stratified to receive or skip adjuvant chemotherapy based on both clinical risk and the MammaPrint score for genomic risk. Fully 44% of women with ILC had discordant clinical and genomic risks.
With a median follow-up of 8.7 years, the 5-year rate of distant metastasis–free survival among all patients classified as genomic high risk was 92.1% in women with IDC and 88.1% in women with ILC, with overlapping 95% confidence intervals. Rates of distant metastasis–free survival for patients with genomic low risk were 96.4% in women with IDC and 96.6% in women with ILC, again with confidence intervals that overlapped.
The pattern was essentially the same for overall survival, and results carried over into 8-year outcomes as well.
“We believe that MammaPrint is a clinically useful test for patients diagnosed with ILC,” Dr. Metzger said. “There are similar survival outcomes for ILC and IDC when matched by genomic risk. This is an important message.”
It should be standard to omit chemotherapy for patients who have ILC classified as high clinical risk but low genomic risk by MammaPrint, Dr. Metzger recommended. “By contrast, MammaPrint should facilitate chemotherapy treatment decisions for patients diagnosed with ILC and high-risk MammaPrint,” he said.
Prognostic, but is it predictive?
“This is a well-designed prospective multicenter trial and provides the best evidence to date that MammaPrint is an important prognostic tool for ILC,” Todd Tuttle, MD, of the University of Minnesota in Minneapolis, said in an interview.
Dr. Tuttle said he mainly uses the MammaPrint test and the Oncotype DX 21-gene recurrence score to estimate prognosis for his patients with ILC.
These new data establish that MammaPrint is prognostic in ILC, but the value of MammaPrint’s genomic high-risk result for making the decision about chemotherapy is still unclear, according to Dr. Tuttle.
“I don’t think we know whether MammaPrint can predict the benefit of chemotherapy for patients with stage I or II ILC,” he elaborated. “We need further high-quality studies such as this one to determine the best treatment strategies for ILC, which is a difficult breast cancer.”
Study details
Of the 5,313 patients studied, 487 had ILC (255 classic and 232 variant) and 4,826 had IDC according to central pathology assessment, Dr. Metzger reported.
MammaPrint classified 39% of the IDC group and 16% of the ILC group (10% of those with classic disease and 23% of those with variant disease) as genomically high risk for recurrence. The Adjuvant! Online tool classified 48.3% of ILC and 51.5% of IDC patients as clinically high risk.
Among the 44% of women with ILC having discordant genomic and clinical risk, discordance was usually due to the combination of low genomic risk and high clinical risk, seen in 38% of the ILC group.
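The two risk axes behave like a simple two-by-two classification: patients are concordant when clinical and genomic calls agree, and discordant otherwise, with the high-clinical/low-genomic cell being the one where the trial supports omitting chemotherapy. A minimal sketch of that logic, purely for illustration (the labels and boolean inputs here are hypothetical simplifications; real assays return calibrated risk scores, not flags):

```python
# Illustrative sketch: cross-tabulating clinical and genomic risk to flag
# discordant cases. Labels and data are hypothetical.
from collections import Counter

def risk_concordance(clinical_high: bool, genomic_high: bool) -> str:
    """Classify one patient by agreement of clinical and genomic risk."""
    if clinical_high == genomic_high:
        return "concordant"
    # High clinical / low genomic risk is the group in which
    # omitting chemotherapy is supported.
    return "clinical-high/genomic-low" if clinical_high else "clinical-low/genomic-high"

# Hypothetical patients, not trial data.
patients = [
    {"id": 1, "clinical_high": True,  "genomic_high": False},
    {"id": 2, "clinical_high": True,  "genomic_high": True},
    {"id": 3, "clinical_high": False, "genomic_high": False},
]

tally = Counter(risk_concordance(p["clinical_high"], p["genomic_high"]) for p in patients)
print(tally)  # e.g. Counter({'concordant': 2, 'clinical-high/genomic-low': 1})
```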
The curves for 5-year distant metastasis–free survival stratified by genomic risk essentially overlapped for the IDC and ILC groups. Furthermore, there was no significant interaction of histologic type and genomic risk on this outcome (P = .547).
The 5-year rate of overall survival among women with genomic high risk was 95.6% in the IDC group and 93.5% in the ILC group. Among women with genomic low risk, 5-year overall survival was 98.1% in the IDC group and 97.7% in the ILC group, again with overlapping confidence intervals within each risk category.
The study was funded with support from the Breast Cancer Research Foundation. Dr. Metzger disclosed consulting fees from AbbVie, Genentech, Roche, and Pfizer. Dr. Tuttle disclosed no conflicts of interest.
SOURCE: Metzger O et al. EBCC-12 Virtual Conference. Abstract 6.
FROM EBCC-12 VIRTUAL CONFERENCE
Migraine nerve stimulation device now available over the counter
The Food and Drug Administration has cleared Cefaly Dual (Cefaly Technology) for over-the-counter use; the device was previously available only by prescription.
Most migraines involve the trigeminal nerve, which can be accessed through the skin on the forehead. Cefaly Dual stimulates the trigeminal nerve using a reusable self-adhesive electrode placed on the forehead.
The device has two settings: ACUTE and PREVENT. In the ACUTE setting, the individual wears the device for 60 minutes at headache onset or during a migraine attack. In the PREVENT setting, the individual wears the device for 20 minutes daily to help prevent future episodes.
At the start of a session, the wearer may feel a slight tingling sensation, which gradually increases and spreads throughout the forehead and the front part of the head. After about 14 minutes, the intensity stabilizes and remains constant until the treatment session is over, according to the company. The device automatically shuts off at the end of each session. It can be used as a stand-alone option or with existing treatment, the company noted.
“For millions of people across the U.S., living with migraine pain and coping with debilitating symptoms are daily realities. It is our mission to provide consumers with increased access to an effective and safe dual modality migraine treatment that is scientifically proven to reduce the number of monthly migraine days by almost half,” Jennifer Trainor McDermott, CEO of Cefaly Technology, said in a news release.
The FDA’s over-the-counter clearance of Cefaly Dual was based on several randomized, controlled clinical trials supporting the efficacy and safety of the device, the company said.
An earlier version of the Cefaly device was approved in the United States in March 2014 to help prevent migraine headache in adults aged 18 or older. The next-generation Cefaly Dual device is “small and sleek in comparison to its older model, which uses bands along the sides to create room for batteries. The newest device is palm-sized, more portable, and uses a battery that is rechargeable via USB,” the company said.
Last spring, the company announced a buyback program in which customers in the United States may return their original device and receive a discount on the purchase of the Cefaly Dual device.
A version of this article originally appeared on Medscape.com.
Study advances personalized treatment for older breast cancer patients
Findings from the study were reported at the 12th European Breast Cancer Conference.
“Primary endocrine therapy is usually reserved for older, less fit, and frail women. Rates of use vary widely,” noted investigator Lynda Wyld, MBChB, PhD, of the University of Sheffield (England).
“Although there is no set threshold for who is suitable, some women are undoubtedly over- and undertreated for their breast cancer,” she added.
Dr. Wyld and colleagues undertook the Age Gap study among women older than 70 years with breast cancer recruited from 56 U.K. breast units during 2013-2018.
The main goals were to determine which women can be safely offered primary endocrine therapy as nonstandard care and to develop and test a tool to help women in this age group make treatment decisions.
The first component of the study was a multicenter, prospective cohort study of women with ER+ disease who were eligible for surgery. Results showed that breast cancer–specific mortality was greater with primary endocrine therapy than with surgery in the entire cohort. However, breast cancer–specific mortality was lower with primary endocrine therapy than with surgery in a cohort matched with propensity scores to achieve similar age, fitness, and frailty.
The second component of the study was a cluster-randomized controlled trial of women with operable breast cancer, most of whom had ER+ disease. Results showed that a decision support tool increased awareness of treatment options and readiness to decide. The tool also altered treatment choices, prompting a larger share of patients with ER+ disease to choose primary endocrine therapy.
Prospective cohort study
The prospective observational study was conducted in 2,854 women with ER+ disease who were eligible for surgery and treated in usual practice. Most women (n = 2,354) were treated with surgery (followed by antiestrogen therapy), while the rest received primary endocrine therapy (n = 500).
In the entire cohort, patients undergoing surgery were younger, had a lower level of comorbidity, and were less often frail. But these characteristics were generally similar in a propensity-matched cohort of 672 patients.
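Matching of this kind can be sketched in a few lines: each patient who received primary endocrine therapy is paired with the closest-scoring surgical patient, within a tolerance (caliper), and matched patients are removed from the pool. The scores, caliper, and greedy pairing below are hypothetical illustrations; the trial’s actual matching model is not described in this report:

```python
# Illustrative 1:1 nearest-neighbor propensity matching with a caliper.
# Scores would normally come from a logistic model of treatment assignment
# on age, comorbidity, and frailty; here they are made-up numbers.

def match_pairs(treated_scores, control_scores, caliper=0.05):
    """Greedily pair each treated score with the nearest unused control score."""
    available = dict(enumerate(control_scores))  # control index -> score
    pairs = []
    for t_idx, t in enumerate(treated_scores):
        if not available:
            break
        c_idx = min(available, key=lambda i: abs(available[i] - t))
        if abs(available[c_idx] - t) <= caliper:  # accept only close matches
            pairs.append((t_idx, c_idx))
            del available[c_idx]                  # match without replacement
    return pairs

pet = [0.30, 0.62, 0.90]            # hypothetical scores, endocrine-therapy group
surgery = [0.28, 0.61, 0.35, 0.10]  # hypothetical scores, surgery group
print(match_pairs(pet, surgery))    # [(0, 0), (1, 1)] -- 0.90 has no match within the caliper
```

The unmatched 0.90 patient illustrates why a matched cohort is smaller than the full cohort: treated patients without a comparable control are dropped, which is how the analysis arrives at groups of similar age, fitness, and frailty.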
At a median follow-up of 52 months, overall and breast cancer–specific survival were significantly poorer with primary endocrine therapy versus surgery in the entire cohort but not in the propensity-matched cohort.
In the entire cohort, the breast cancer–specific mortality was 9.5% with primary endocrine therapy and 4.9% with surgery. In the propensity-matched cohort, breast cancer–specific mortality was 3.1% and 6.6%, respectively.
The overall mortality was 41.8% with primary endocrine therapy and 14.6% with surgery in the entire cohort, but the gap narrowed to 34.5% and 25.6%, respectively, in the propensity-matched cohort.
In the latter, “although there is a slight divergence in overall survival and it’s likely that with longer-term follow-up this will become significant, at the moment, it isn’t,” Dr. Wyld commented.
Curves for breast cancer–specific survival basically overlapped until 5 years, when surgery started to show an advantage. The rate of locoregional recurrence or progression was low and not significantly different by treatment.
None of the women in the entire cohort died from surgery. “But it’s worth bearing in mind that these were all women selected for surgery, who were thought to be fit for it by their surgeons. The least fit women in this cohort will have obviously been offered primary endocrine therapy,” Dr. Wyld cautioned.
Although 19% of patients had a surgical complication, only 2.1% had a systemic surgical complication.
Cluster-randomized controlled trial
In the cluster-randomized controlled trial, researchers compared a decision support tool to usual care. The tool was developed using U.K. registry data from almost 30,000 older women and input from women in this age group on their preferred format and method of presentation, according to Dr. Wyld.
The tool consists of an algorithm available to clinicians online (for input of tumor stage and biology, comorbidities, and functional status) plus a booklet and outcome sheets for patients to take home after discussions that can be personalized to their particulars.
Intention-to-treat analyses were based on 1,339 patients with operable breast cancer, 1,161 of whom had ER+ disease. Per-protocol analyses were based on the subset of 449 patients who were offered a choice between surgery and primary endocrine therapy, presumably because they were less fit and frailer.
Results showed that, at 6 months, mean scores for global quality of life on the EORTC questionnaire did not differ between decision support and usual care in the intention-to-treat population (69.0 vs. 68.9; P = .900), but scores were more favorable with decision support in the per-protocol population (70.7 vs. 66.8; P = .044).
The tool also altered treatment choices, with a larger share of ER+ patients choosing primary endocrine therapy (21.0% vs. 15.4%; P = .029) but still having similar disease outcomes.
Although ER+ patients in the decision support group more often selected primary endocrine therapy, at a median follow-up of 36 months, the groups did not differ significantly on overall survival, cause-specific survival, or time to recurrence in either intention-to-treat or per-protocol analyses.
Larger shares of women in the decision support group reported that they had adequate knowledge about the treatment options available to them (94% vs. 74%), were aware of the advantages and disadvantages of each option (91% vs. 76%), knew which option they preferred (96% vs. 91%), and were ready to make a decision (99% vs. 90%).
Applying results to practice
“Most women over the age of 70 are relatively fit, and the aim should be to treat them with surgery,” Dr. Wyld said. “For the less fit, a point is reached where the oncology benefits of surgery disappear and surgery may just cause harm. This threshold appears to be for women in their mid-80s with moderate to poor health.”
“Use of the Age Gap online tool may enhance shared decision-making for these women while increasing knowledge. And whilst it does seem to increase the use of primary endocrine therapy, this does not seem to have an adverse impact on survival at 36 months of follow-up,” she added.
“The study by Dr. Wyld and colleagues adds to the available literature regarding the scenarios in which some treatments may be omitted without impacting overall survival in older women with breast cancer,” Lesly A. Dossett, MD, of Michigan Medicine in Ann Arbor, commented in an interview.
In her own practice, Dr. Dossett emphasizes the generally favorable prognosis for older women with hormone receptor–positive breast cancer, she said. However, tools that help communicate risk and clarify the value of various therapies are welcome.
“The decision support tool appears to be a promising tool in helping to avoid treatments that are unlikely to benefit older women with breast cancer,” Dr. Dossett said. “The results will be widely applicable, as there is growing recognition that this patient population is at risk for overtreatment.”
The study was funded by a U.K. National Institute for Health Research Programme Grant for Applied Research. Dr. Wyld and Dr. Dossett said they had no relevant conflicts of interest.
SOURCES: Wyld L et al. EBCC-12 Virtual Conference. Abstract 8A and Abstract 8B.
Findings from the study were reported at the 12th European Breast Cancer Conference.
“Primary endocrine therapy is usually reserved for older, less fit, and frail women. Rates of use vary widely,” noted investigator Lynda Wyld, MBChB, PhD, of the University of Sheffield (England).
“Although there is no set threshold for who is suitable, some women are undoubtedly over- and undertreated for their breast cancer,” she added.
Dr. Wyld and colleagues undertook the Age Gap study among women older than 70 years with breast cancer recruited from 56 U.K. breast units during 2013-2018.
The main goals were to determine which women can be safely offered primary endocrine therapy as nonstandard care and to develop and test a tool to help women in this age group make treatment decisions.
The first component of the study was a multicenter, prospective cohort study of women with ER+ disease who were eligible for surgery. Results showed that breast cancer–specific mortality was greater with primary endocrine therapy than with surgery in the entire cohort. However, breast cancer–specific mortality was lower with primary endocrine therapy than with surgery in a cohort matched with propensity scores to achieve similar age, fitness, and frailty.
The second component of the study was a cluster-randomized controlled trial of women with operable breast cancer, most of whom had ER+ disease. Results showed that a decision support tool increased awareness of treatment options and readiness to decide. The tool also altered treatment choices, prompting a larger share of patients with ER+ disease to choose primary endocrine therapy.
Prospective cohort study
The prospective observational study was conducted in 2,854 women with ER+ disease who were eligible for surgery and treated in usual practice. Most women (n = 2,354) were treated with surgery (followed by antiestrogen therapy), while the rest received primary endocrine therapy (n = 500).
In the entire cohort, patients undergoing surgery were younger, had a lower level of comorbidity, and were less often frail. But these characteristics were generally similar in a propensity-matched cohort of 672 patients.
At a median follow-up of 52 months, overall and breast cancer–specific survival were significantly poorer with primary endocrine therapy versus surgery in the entire cohort but not in the propensity-matched cohort.
In the entire cohort, the breast cancer–specific mortality was 9.5% with primary endocrine therapy and 4.9% with surgery. In the propensity-matched cohort, breast cancer–specific mortality was 3.1% and 6.6%, respectively.
The overall mortality was 41.8% with primary endocrine therapy and 14.6% with surgery in the entire cohort, but the gap narrowed to 34.5% and 25.6%, respectively, in the propensity-matched cohort.
In the latter, “although there is a slight divergence in overall survival and it’s likely that with longer-term follow-up this will become significant, at the moment, it isn’t,” Dr. Wyld commented.
Curves for breast cancer–specific survival basically overlapped until 5 years, when surgery started to show an advantage. The rate of locoregional recurrence or progression was low and not significantly different by treatment.
None of the women in the entire cohort died from surgery. “But it’s worth bearing in mind that these were all women selected for surgery, who were thought to be fit for it by their surgeons. The least fit women in this cohort will have obviously been offered primary endocrine therapy,” Dr. Wyld cautioned.
Although 19% of patients had a surgical complication, only 2.1% had a systemic surgical complication.
Cluster-randomized controlled trial
In the cluster-randomized controlled trial, researchers compared a decision support tool to usual care. The tool was developed using U.K. registry data from almost 30,000 older women and input from women in this age group on their preferred format and method of presentation, according to Dr. Wyld.
The tool consists of an algorithm available to clinicians online (for input of tumor stage and biology, comorbidities, and functional status) plus a booklet and outcome sheets for patients to take home after discussions that can be personalized to their particulars.
Intention-to-treat analyses were based on 1,339 patients with operable breast cancer, 1,161 of whom had ER+ disease. Per-protocol analyses were based on the subset of 449 patients who were offered a choice between surgery and primary endocrine therapy, presumably because they were less fit and frailer.
Results showed that, at 6 months, mean scores for global quality of life on the EORTC questionnaire did not differ between decision support and usual care in the intention-to-treat population (69.0 vs. 68.9; P = .900), but scores were more favorable with decision support in the per-protocol population (70.7 vs. 66.8; P = .044).
The tool also altered treatment choices, with a larger share of ER+ patients choosing primary endocrine therapy (21.0% vs. 15.4%; P = .029) but still having similar disease outcomes.
Although ER+ patients in the decision support group more often selected primary endocrine therapy, at a median follow-up of 36 months, the groups did not differ significantly on overall survival, cause-specific survival, or time to recurrence in either intention-to-treat or per-protocol analyses.
Larger shares of women in the decision support group reported that they had adequate knowledge about the treatment options available to them (94% vs. 74%), were aware of the advantages and disadvantages of each option (91% vs. 76%), knew which option they preferred (96% vs. 91%), and were ready to make a decision (99% vs. 90%).
Applying results to practice
“Most women over the age of 70 are relatively fit, and the aim should be to treat them with surgery,” Dr. Wyld said. “For the less fit, a point is reached where the oncology benefits of surgery disappear and surgery may just cause harm. This threshold appears to be for women in their mid-80s with moderate to poor health.”
“Use of the Age Gap online tool may enhance shared decision-making for these women while increasing knowledge. And whilst it does seem to increase the use of primary endocrine therapy, this does not seem to have an adverse impact on survival at 36 months of follow-up,” she added.
“The study by Dr. Wyld and colleagues adds to the available literature regarding the scenarios in which some treatments may be omitted without impacting overall survival in older women with breast cancer,” Lesly A. Dossett, MD, of Michigan Medicine in Ann Arbor, commented in an interview.
In her own practice, Dr. Dossett emphasizes the generally favorable prognosis for older women with hormone receptor–positive breast cancer, she said. However, tools that help communicate risk and clarify the value of various therapies are welcome.
“The decision support tool appears to be a promising tool in helping to avoid treatments that are unlikely to benefit older women with breast cancer,” Dr. Dossett said. “The results will be widely applicable, as there is growing recognition that this patient population is at risk for overtreatment.”
The study was funded by the U.K. National Institute for Health Research programme grant for applied research. Dr. Wyld and Dr. Dossett said they had no relevant conflicts of interest.
SOURCES: Wyld L et al. EBCC-12 Virtual Congress. Abstract 8A and Abstract 8B.
Findings from the study were reported at the 12th European Breast Cancer Conference.
“Primary endocrine therapy is usually reserved for older, less fit, and frail women. Rates of use vary widely,” noted investigator Lynda Wyld, MBChB, PhD, of the University of Sheffield (England).
“Although there is no set threshold for who is suitable, some women are undoubtedly over- and undertreated for their breast cancer,” she added.
Dr. Wyld and colleagues undertook the Age Gap study among women older than 70 years with breast cancer recruited from 56 U.K. breast units during 2013-2018.
The main goals were to determine which women can be safely offered primary endocrine therapy as nonstandard care and to develop and test a tool to help women in this age group make treatment decisions.
The first component of the study was a multicenter, prospective cohort study of women with ER+ disease who were eligible for surgery. Results showed that breast cancer–specific mortality was greater with primary endocrine therapy than with surgery in the entire cohort. However, breast cancer–specific mortality was lower with primary endocrine therapy than with surgery in a cohort matched with propensity scores to achieve similar age, fitness, and frailty.
The second component of the study was a cluster-randomized controlled trial of women with operable breast cancer, most of whom had ER+ disease. Results showed that a decision support tool increased awareness of treatment options and readiness to decide. The tool also altered treatment choices, prompting a larger share of patients with ER+ disease to choose primary endocrine therapy.
Prospective cohort study
The prospective observational study was conducted in 2,854 women with ER+ disease who were eligible for surgery and treated in usual practice. Most women (n = 2,354) were treated with surgery (followed by antiestrogen therapy), while the rest received primary endocrine therapy (n = 500).
In the entire cohort, patients undergoing surgery were younger, had a lower level of comorbidity, and were less often frail. But these characteristics were generally similar in a propensity-matched cohort of 672 patients.
At a median follow-up of 52 months, overall and breast cancer–specific survival were significantly poorer with primary endocrine therapy versus surgery in the entire cohort but not in the propensity-matched cohort.
In the entire cohort, the breast cancer–specific mortality was 9.5% with primary endocrine therapy and 4.9% with surgery. In the propensity-matched cohort, breast cancer–specific mortality was 3.1% and 6.6%, respectively.
The overall mortality was 41.8% with primary endocrine therapy and 14.6% with surgery in the entire cohort, but the gap narrowed to 34.5% and 25.6%, respectively, in the propensity-matched cohort.
In the latter, “although there is a slight divergence in overall survival and it’s likely that with longer-term follow-up this will become significant, at the moment, it isn’t,” Dr. Wyld commented.
Curves for breast cancer–specific survival basically overlapped until 5 years, when surgery started to show an advantage. The rate of locoregional recurrence or progression was low and not significantly different by treatment.
None of the women in the entire cohort died from surgery. “But it’s worth bearing in mind that these were all women selected for surgery, who were thought to be fit for it by their surgeons. The least fit women in this cohort will have obviously been offered primary endocrine therapy,” Dr. Wyld cautioned.
Although 19% of patients had a surgical complication, only 2.1% had a systemic surgical complication.
Cluster-randomized controlled trial
In the cluster-randomized controlled trial, researchers compared a decision support tool to usual care. The tool was developed using U.K. registry data from almost 30,000 older women and input from women in this age group on their preferred format and method of presentation, according to Dr. Wyld.
The tool consists of an algorithm available to clinicians online (for input of tumor stage and biology, comorbidities, and functional status) plus a booklet and outcome sheets for patients to take home after discussions that can be personalized to their particulars.
Intention-to-treat analyses were based on 1,339 patients with operable breast cancer, 1,161 of whom had ER+ disease. Per-protocol analyses were based on the subset of 449 patients who were offered a choice between surgery and primary endocrine therapy, presumably because they were less fit and frailer.
Results showed that, at 6 months, mean scores for global quality of life on the EORTC questionnaire did not differ between decision support and usual care in the intention-to-treat population (69.0 vs. 68.9; P = .900), but scores were more favorable with decision support in the per-protocol population (70.7 vs. 66.8; P = .044).
The tool also altered treatment choices, with a larger share of ER+ patients choosing primary endocrine therapy (21.0% vs. 15.4%; P = .029) but still having similar disease outcomes.
Although ER+ patients in the decision support group more often selected primary endocrine therapy, at a median follow-up of 36 months, the groups did not differ significantly on overall survival, cause-specific survival, or time to recurrence in either intention-to-treat or per-protocol analyses.
Larger shares of women in the decision support group reported that they had adequate knowledge about the treatment options available to them (94% vs. 74%), were aware of the advantages and disadvantages of each option (91% vs. 76%), knew which option they preferred (96% vs. 91%), and were ready to make a decision (99% vs. 90%).
Applying results to practice
“Most women over the age of 70 are relatively fit, and the aim should be to treat them with surgery,” Dr. Wyld said. “For the less fit, a point is reached where the oncology benefits of surgery disappear and surgery may just cause harm. This threshold appears to be for women in their mid-80s with moderate to poor health.”
“Use of the Age Gap online tool may enhance shared decision-making for these women while increasing knowledge. And whilst it does seem to increase the use of primary endocrine therapy, this does not seem to have an adverse impact on survival at 36 months of follow-up,” she added.
“The study by Dr. Wyld and colleagues adds to the available literature regarding the scenarios in which some treatments may be omitted without impacting overall survival in older women with breast cancer,” Lesly A. Dossett, MD, of Michigan Medicine in Ann Arbor, commented in an interview.
In her own practice, Dr. Dossett said, she emphasizes the generally favorable prognosis for older women with hormone receptor–positive breast cancer. Even so, tools that help communicate risk and clarify the value of various therapies are welcome.
“The decision support tool appears to be a promising tool in helping to avoid treatments that are unlikely to benefit older women with breast cancer,” Dr. Dossett said. “The results will be widely applicable, as there is growing recognition that this patient population is at risk for overtreatment.”
The study was funded by a U.K. National Institute for Health Research programme grant for applied research. Dr. Wyld and Dr. Dossett said they had no relevant conflicts of interest.
SOURCES: Wyld L et al. EBCC-12 Virtual Congress. Abstract 8A and Abstract 8B.
FROM EBCC-12 VIRTUAL CONFERENCE
Biomarker in the eye may flag neurodegeneration risk
Neurofilament light chain (NfL) has been detected in the vitreous humor of the eye, opening the door to a potential new method of predicting neurodegenerative disease, new research suggests.
In a study of 77 patients undergoing eye surgery for various conditions, more than 70% had more than 20 pg/mL of NfL in their vitreous humor. Higher levels of NfL were associated with higher levels of other biomarkers known to be associated with Alzheimer’s disease, including amyloid-beta and tau proteins.
“The study had three primary findings,” said lead author Manju L. Subramanian, MD, associate professor of ophthalmology at Boston University.
First, the investigators were able to detect levels of NfL in eye fluid; and second, those levels were not in any way correlated to the patient’s clinical eye condition, Dr. Subramanian said. “The third finding was that we were able to correlate those neurofilament light levels with other markers that have been known to be associated with conditions such as Alzheimer’s disease,” she noted.
For Dr. Subramanian, these findings add to the hypothesis that the eye is an extension of the brain. “This is further evidence that the eye might potentially be a proxy for neurodegenerative diseases,” she said. “So finding neurofilament light chain in the eye demonstrates that the eye is not an isolated organ, and things that happen in the body can affect the eye and vice versa.”
The findings were published online Sept. 17 in Alzheimer’s Research & Therapy.
Verge of clinical applicability?
Early diagnosis of neurodegenerative diseases remains a challenge, the investigators noted. As such, there is a palpable need for reliable biomarkers that can help with early diagnosis, prognostic assessment, and measurable response to treatment for Alzheimer’s disease and other neurologic disorders.
Recent research has identified NfL as a potential screening tool and some researchers believe it to be on the verge of clinical applicability. In addition, increased levels of the biomarker have been observed in both the cerebrospinal fluid (CSF) and blood of individuals with neurodegeneration and neurological diseases, including Alzheimer’s disease. In previous studies, for example, elevated levels of NfL in CSF and blood have been shown to reliably distinguish between patients with Alzheimer’s disease and healthy volunteers.
Because certain eye diseases have been associated with Alzheimer’s disease in epidemiological studies, they may share common risk factors and pathological mechanisms at the molecular level, the researchers noted. In an earlier study, the current investigators found that cognitive function among patients with eye disease was significantly associated with amyloid-beta and total tau protein levels in the vitreous humor.
Given these connections, the researchers hypothesized that NfL could be identified in the vitreous humor and may be associated with other relevant biomarkers of neuronal origin. “Neurofilament light chain is detectable in the cerebrospinal fluid, but it’s never been tested for detection in the eye,” Dr. Subramanian noted.
In total, vitreous humor samples were collected from 77 unique participants (mean age, 56.2 years; 63% men) as part of the single-center, prospective, cross-sectional cohort study. The researchers aspirated 0.5 to 1.0 mL of undiluted vitreous fluid during vitrectomy, while whole blood was drawn for APOE genotyping.
Immunoassay was used to quantitatively measure for NfL, amyloid-beta, total tau, phosphorylated tau 181 (p-tau181), inflammatory cytokines, chemokines, and vascular proteins in the vitreous humor. The trial’s primary outcome measures were the detection of NfL levels in the vitreous humor, as well as its associations with other proteins.
Significant correlations
Results showed that 55 of the 77 participants (71.4%) had at least 20 pg/mL of NfL protein present in the vitreous humor; the median level was 68.65 pg/mL. Statistically significant associations were found between vitreous NfL levels and Abeta40, Abeta42, and total tau, with higher NfL levels accompanying higher levels of all three biomarkers. By contrast, NfL levels showed no positive association with vitreous levels of p-tau181.
Vitreous NfL concentration was significantly associated with inflammatory cytokines, including interleukin-15, interleukin-16, and monocyte chemoattractant protein-1, as well as vascular proteins such as vascular endothelial growth factor receptor-1, VEGF-C, vascular cell adhesion molecule-1, Tie-2, and intercellular adhesion molecule-1.
Despite these findings, NfL in the vitreous humor was not associated with patients’ clinical ophthalmic conditions or systemic diseases such as hypertension, diabetes, and hyperlipidemia. Similarly, NfL was not significantly associated with APOE genotype E2 and E4, the alleles most commonly associated with Alzheimer’s disease.
Finally, no statistically significant associations were found between NfL and Mini-Mental State Examination (MMSE) scores.
A “first step”
Most research currently examining the role of the eye in neurodegenerative disease is focused on retinal biomarkers imaged by optical coherence tomography, the investigators noted. Although promising, this approach has so far yielded conflicting results.
Similarly, while the diagnostic potential of the core CSF biomarkers for AD (Abeta40, Abeta42, p-tau, and total tau) is well established, the practical utility of testing CSF for neurodegenerative diseases is limited, wrote the researchers.
As such, an additional biomarker source such as NfL, a quantifiable protein-based marker in eye fluid, has the potential to play an important role in predicting neurodegenerative disease in the clinical setting, they added.
“The holy grail of neurodegenerative-disease diagnosis is early diagnosis. Because if you can implement treatment early, you can slow down and potentially halt the progression of these diseases,” Dr. Subramanian said.
“This study is the first step toward determining if the eye could play a potential role in early diagnosis of conditions such as Alzheimer’s disease,” she added.
That said, Dr. Subramanian was quick to recognize the findings’ preliminary nature and that they do not offer reliable evidence that vitreous NfL levels definitively represent neurodegeneration. As such, the investigators called for more research to validate the association between this type of biomarker and other established biomarkers of neurodegeneration, such as those found in CSF or on MRI and PET scans.
“At this point, we can’t look at eye fluid and say that people have neurodegenerative diseases,” she noted. “The other thing to consider is that vitreous humor is at the back of the eye, so it’s actually a fairly invasive procedure.
“I think the next step is to look at other types of eye fluids such as the aqueous fluid in the front of the eye, or even tear secretions, potentially,” Dr. Subramanian said.
Other study limitations include the lack of an association between NfL levels and MMSE scores and the fact that none of the study participants had been diagnosed with Alzheimer’s disease. Validation studies comparing vitreous NfL levels in patients with mild cognitive impairment or AD with those in normal controls are needed, the investigators noted.
Fascinating but impractical?
Commenting on the findings, Sharon Fekrat, MD, professor of ophthalmology, Duke University, Durham, N.C., agreed that the eye may hold potential importance in diagnosing neurodegeneration. However, she suggested that vitreous humor may not be the most expedient medium to use.
“I commend the authors for this fascinating work. But practically speaking, if we ultimately want to use intraocular fluid to diagnose Alzheimer’s and perhaps other neurodegeneration, I think aqueous humor might be more practical than the vitreous humor,” said Dr. Fekrat, who was not involved with the research. “What might be even better is to have a device that can be held against the eyeball that measures the levels of various substances inside the eyeball without having to enter the eye,” added Justin Ma, a Duke University medical student working under Dr. Fekrat’s guidance. “It could be similar technology to what’s currently used to measure blood glucose levels,” Mr. Ma added.
The study was supported in part by the National Institute on Aging. Dr. Subramanian, Dr. Fekrat, and Mr. Ma have disclosed no relevant financial relationships. Disclosures for other study authors are listed in the original article.
A version of this article originally appeared on Medscape.com.
FROM ALZHEIMER’S RESEARCH & THERAPY
Breast Cancer Journal Scan: October 2020
Screening mammography has led to decreased breast cancer-specific mortality, and both digital mammography (DM) and digital breast tomosynthesis (DBT) are available modalities. A study by Lowry and colleagues evaluated DM and DBT performance in over 1,500,000 women aged 40-79 years without a prior history of breast cancer and demonstrated greater DBT benefit on the initial screening exam. The DBT benefit persisted on subsequent screening for women with heterogeneously dense breasts and scattered fibroglandular density, while no improvement in recall or cancer detection rates was seen with DBT on subsequent exams for women with extremely dense breasts. A physician survey showed 30% utilization of DBT, with higher uptake in academic settings and in practices with a higher number of breast imagers and mammography units. Interestingly, 16% of respondents used mammographic density as a criterion to select patients to undergo DBT. Guidelines to help determine which women benefit from DBT would be a useful asset to clinicians and help optimize resources.
Although the majority of breast cancers are detected by screening mammography, a significant proportion are first noticed by the patient. Interval breast cancers, those detected between a normal mammogram and the next scheduled mammogram, have more unfavorable features and worse survival compared with those detected by screening. Niraula et al found that interval breast cancers accounted for approximately 20% of cases, were over 6 times more likely to be higher grade, were nearly 3 times more likely to be estrogen receptor-negative, and carried a hazard ratio of 3.5 for breast cancer-specific mortality compared with screening-detected breast cancers. These findings are not entirely surprising, as tumors with more aggressive biology are expected to have a faster onset and progression. Development of more personalized screening strategies may help address breast cancer heterogeneity.
Breast cancer diagnosed in women ≥70 years of age tends to be early stage and hormone receptor (HR)-positive. These cancers carry an excellent prognosis, and omission of routine sentinel lymph node biopsy (SLNB) and post-lumpectomy radiotherapy (assuming endocrine therapy is given) are acceptable strategies. However, these modalities are still utilized at fairly high rates nationally. Wang and colleagues conducted a qualitative study in women ≥70 years of age without a diagnosis of breast cancer to evaluate treatment preferences in the setting of a hypothetical diagnosis of low-risk HR-positive breast cancer. A total of 40% stated they would elect to undergo SLNB, regarding the procedure as low risk and as providing prognostic information. Most women (73%) would choose to avoid radiation because of the perceived risk/benefit ratio and inconvenience. This study highlights the importance of effective communication regarding the excellent prognosis of these cancers in older women and of presenting de-escalation strategies that reduce overtreatment and potential harms while achieving similar benefit.
Higher rates of genetic mutations (non-BRCA1/2) have been observed in patients with breast cancer and another primary cancer compared with those with a single primary breast cancer. Maxwell et al demonstrated rates of 7-9% versus 4-5% for patients with multiple primary cancers and a single breast cancer, respectively. Further, they showed that gene mutations (other than BRCA) are found in up to 25% of patients with breast cancer and another primary cancer whose first breast cancer was diagnosed at age ≤30 years. Genetic testing is not a one-size-fits-all method, and many patients are offered multigene panel testing. A multidisciplinary approach is key to identifying patients at higher risk, implementing effective screening, and hopefully preventing future cancer development.
Erin Roesch, MD
The Cleveland Clinic
References:
Hardesty LA, Kreidler SM, Glueck DH. Digital breast tomosynthesis utilization in the United States: A survey of physician members of the Society of Breast Imaging. J Am Coll Radiol. 2016;11S:R67-R73.
Bellio G, Marion R, Giudici F, Kus S, Tonutti M, Zanconati F, Bortul M. Interval breast cancer versus screen-detected cancer: comparison of clinicopathologic characteristics in a single-center analysis. Clin Breast Cancer. 2017;17:564-71.
Piccinin C, Panchal S, Watkins N, Kim RH. An update on genetic risk assessment and prevention: the role of genetic testing panels in breast cancer. Expert Rev Anticancer Ther. 2019;19:787-801.
Screening mammography has led to decreased breast cancer-specific mortality, and both digital mammography (DM) and digital breast tomosynthesis (DBT) are available modalities. A study by Lowry and colleagues evaluated DM and DBT performance in over 1,500,000 women age 40-79 without a prior history of breast cancer and demonstrated greater DBT benefit on initial screening exam. DBT benefit persisted on subsequent screening for women with heterogeneously dense breasts and scattered fibroglandular density, while no improvement in recall or cancer detection rates was seen for women with extremely dense breasts with DBT on subsequent exams. A physician survey showed 30% utilization of DBT, with higher uptake in academic settings and those with higher number of breast imagers and mammography units. Interestingly, 16% of respondents used mammographic density as a criterion to select patients to undergo DBT. Guidelines to help determine which women benefit from DBT would be a useful asset to clinicians and help optimize resources.
Although the majority of breast cancers are detected by screening mammography, a significant proportion are first noticed by the patient. Interval breast cancers (those detected between a normal screening mammogram and the next scheduled mammogram) have more unfavorable features and worse survival than those detected by screening. Niraula et al found that interval breast cancers accounted for approximately 20% of cases, were over 6 times more likely to be higher grade, were nearly 3 times more likely to be estrogen receptor-negative, and had a hazard ratio of 3.5 for breast cancer-specific mortality compared with screen-detected breast cancers. These findings are not entirely surprising, as tumors with more aggressive biology are expected to have a faster onset and progression. Development of more personalized screening strategies may help address breast cancer heterogeneity.
Breast cancer diagnosed in women ≥70 years of age tends to be early stage and hormone receptor (HR)-positive. These cancers carry an excellent prognosis, and omission of routine sentinel lymph node biopsy (SLNB) and of post-lumpectomy radiotherapy (assuming endocrine therapy is given) are acceptable strategies. However, these modalities are still utilized at fairly high rates nationally. Wang and colleagues conducted a qualitative study of women ≥70 years of age without a breast cancer diagnosis to evaluate treatment preferences in the setting of a hypothetical diagnosis of low-risk HR-positive breast cancer. A total of 40% stated they would elect to undergo SLNB, regarding the procedure as low risk and as providing prognostic information. Most women (73%) would choose to avoid radiation, owing to the perceived risk/benefit ratio and to inconvenience. This study highlights the importance of communicating effectively about the excellent prognosis of these cancers in older women and of presenting de-escalation strategies to reduce overtreatment and potential harms while achieving similar benefit.
Interval breast cancer has higher hazard for breast cancer death than screen-detected breast cancer
Key clinical point: Interval breast cancers (IBC) were 6 times more likely to be grade III and carried a 3.5-fold increased hazard of death compared with screen-detected breast cancers (SBC).
Major finding: Breast cancer–specific mortality was significantly higher for IBC compared with SBC (hazard ratio [HR] 3.55; 95% CI, 2.01-6.28; P < .001).
Study details: A cohort study of 69,000 women aged 50-64 years.
Disclosures: Dr Hu is the holder of a Manitoba Medical Services Foundation (MMSF) Allen Rouse Basic Science Career Development Research Award.
Source: Niraula S, et al. JAMA Netw Open. 2020;3(9):e2018179. doi:10.1001/jamanetworkopen.2020.18179.
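The reported statistics are internally consistent, and readers can verify this with a few lines of arithmetic: a Wald 95% confidence interval is symmetric on the log-hazard scale, so the point estimate should equal the geometric mean of the interval limits, and the standard error follows from the interval's half-width. This is a minimal sketch assuming a Wald interval (the summary does not state how the CI was computed):

```python
import math

# Reported values from Niraula et al: HR 3.55, 95% CI 2.01-6.28, P < .001
lo, hi = 2.01, 6.28

# On the log scale a Wald 95% CI is symmetric around log(HR), so the point
# estimate is the geometric mean of the limits and SE is half-width / 1.96.
hr = math.sqrt(lo * hi)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Two-sided P value for H0: HR = 1, via the standard normal CDF.
z = math.log(hr) / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"HR = {hr:.2f}, SE(log HR) = {se:.3f}, z = {z:.2f}, P = {p:.1e}")
```

The recovered point estimate matches the reported HR of 3.55, and the implied two-sided P value is on the order of 10^-5, consistent with the reported P < .001.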