Screening for Prostate Cancer in Black Men
IN THIS ARTICLE
- Prostate cancer screening tools
- Ethnic disparities
- Screening guidance
Prostate cancer, the second most common cancer to affect American men, is a slow-growing cancer that is curable when detected early. While the overall incidence has declined in the past 20 years (see Figure 1), prostate cancer remains a major concern among black men due to disproportionate incidence and mortality rates.1-3 A general understanding of the prostate and of prostate cancer lays the groundwork to acknowledge and address this divide.
ANATOMY OF THE PROSTATE
Although most men know where the prostate gland is located, many do not understand how it functions.4 The largest accessory gland of the male reproductive system, the prostate is located below the bladder and in front of the rectum (see Figure 2).5 The urethra passes through this gland; therefore, enlargement of the prostate can cause constriction of the urethra, which can affect the ability to eliminate urine from the body.5
The prostate is divided into four distinct regions (see Figure 3). Certain conditions, including inflammation and cancer, occur more often in some regions of the prostate than in others; notably, 75% of prostate cancers occur in the peripheral zone (the region located closest to the rectal wall).5,6
DIAGNOSING PROSTATE CANCER
Signs and symptoms
According to the CDC, the signs and symptoms of prostate cancer include
- Difficulty starting urination
- Weak or interrupted flow of urine
- Frequent urination (especially at night)
- Difficulty emptying the bladder
- Pain or burning during urination
- Blood in the urine or semen
- Pain in the back, hips, or pelvis
- Painful ejaculation.
However, none of these signs and symptoms are unique to prostate cancer.7 For instance, difficulty starting urination, weak or interrupted flow of urine, and frequent urination can also be attributed to benign prostatic hyperplasia. Further, in its early stages, prostate cancer may not exhibit any signs or symptoms, making accurate screening essential for detection and treatment.7
Screening tools
There are two primary tools for detection of prostate cancer: the prostate-specific antigen (PSA) test and the digital rectal exam (DRE).8 The blood test for PSA is routinely used as a screening tool and is therefore considered a standard test for prostate cancer.9 A PSA level above 4.0 ng/mL is considered abnormal.10 Although measuring the PSA level can improve the odds of early prostate cancer detection, there is considerable debate over its dependability in this regard, as PSA can be elevated for benign reasons.
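To make the cutoff concrete, here is a minimal sketch in Python of the abnormal-PSA check described above; the 4.0 ng/mL threshold comes from the text, while the function itself is a hypothetical illustration, not clinical software.

```python
def psa_is_abnormal(psa_ng_ml: float, cutoff: float = 4.0) -> bool:
    """Flag a PSA result above the conventional 4.0 ng/mL cutoff.

    An elevated PSA is not diagnostic: benign conditions such as
    benign prostatic hyperplasia can also raise PSA, so an abnormal
    result calls for clinical interpretation, not a cancer diagnosis.
    """
    return psa_ng_ml > cutoff

print(psa_is_abnormal(3.1))  # False -- within the reference range
print(psa_is_abnormal(5.6))  # True -- above the 4.0 ng/mL cutoff
```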
Sociocultural and genetic risk factors
While both black and white men are at an increased risk for prostate cancer if a first-degree relative (ie, father, brother, son) had the disease, one in five black men will develop prostate cancer in their lifetimes, compared with one in seven white men.3 And despite a five-year survival rate of nearly 100% for regional prostate cancer, black men are more than two times as likely as white men to die of the disease (1 in 23 and 1 in 38, respectively).8,11 From 2011 to 2015, the age-adjusted mortality rate of prostate cancer among black men was 40.8, versus 18.2 for non-Hispanic white men (per 100,000 population).12
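As a quick arithmetic check of the figures cited above (a minimal sketch in Python; all values are those quoted in the text):

```python
# Age-adjusted prostate cancer mortality, 2011-2015, per 100,000 population.
black_rate = 40.8   # black men
white_rate = 18.2   # non-Hispanic white men

print(f"Rate ratio: {black_rate / white_rate:.2f}")  # ~2.24, i.e., more than two times

# Lifetime risk of dying of prostate cancer, expressed as percentages.
print(f"Black men: 1 in 23 = {100 / 23:.1f}%")   # ~4.3%
print(f"White men: 1 in 38 = {100 / 38:.1f}%")   # ~2.6%
```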
The disparity in prostate cancer mortality among black men has been attributed to multiple variables. Cultural differences can play a role in whether patients choose to undergo prostate cancer screening. Black men are, for example, less likely than other men to participate in preventive health care practices.13 Although an in-depth discussion is outside the scope of this article, researchers have identified some plausible factors for this, including economic limitations, lack of access to health care, distrust of the health care system, and an indifference to pain or discomfort.13,14 Decisions surrounding prostate screening can also be affected by a patient’s perceived risk for prostate cancer, the impact of a cancer diagnosis, and the availability of treatment.
Other factors that contribute to the higher incidence and mortality rate among black men include genetic predisposition, health beliefs, and knowledge about the prostate and cancer screenings.15 While most researchers have focused on men ages 40 and older, Ogunsanya et al suggested that educating black men about screening for prostate cancer at an earlier age may help them to make informed decisions later in life.15
PRACTICE POINTS
- Prostate cancer remains a major concern among black men due to disproportionate incidence and mortality.
- Developing prostate cancer screening recommendations for black men would help reduce mortality and morbidity in this population.
- Educating black men about screening for prostate cancer at an earlier age may help them to make informed decisions later in life.
IMPLICATIONS FOR PRACTICE
The age at which men should begin screening for prostate cancer has been a source of controversy due to the lack of consensus among the American Cancer Society, the American Urological Association, and the United States Preventive Services Task Force (USPSTF) guidelines (see Table).16-18 The current USPSTF recommendations for prostate cancer screening do not take into account ethnic differences, despite the identified racial disparity.19 Ambiguity in public health policy creates a quandary in the decision-making process regarding testing and treatment.9,19,20
In addition, these guidelines recommend the use of both the DRE and PSA screening tests. Screening should be performed every two years for men who have a PSA level < 2.5 ng/mL, and every year for men who have a level > 2.5 ng/mL.
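Expressed as code, the interval rule reads as follows; this is a minimal sketch (the 2.5 ng/mL cutoff is from the guidelines described above, and annual screening is assumed for a level of exactly 2.5 ng/mL, which the text leaves unspecified):

```python
def rescreening_interval_years(psa_ng_ml: float) -> int:
    """Rescreening interval per the guideline rule quoted above:
    every 2 years when PSA < 2.5 ng/mL, annually when PSA > 2.5 ng/mL.
    A level of exactly 2.5 ng/mL is treated conservatively as annual.
    """
    return 2 if psa_ng_ml < 2.5 else 1

print(rescreening_interval_years(1.8))  # 2 -- rescreen every two years
print(rescreening_interval_years(3.2))  # 1 -- rescreen annually
```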
TREATMENT
Fortunately, there are several treatment options for men who are diagnosed with prostate cancer.22 These include watchful waiting, surgery, radiation, cryotherapy, hormone therapy, and chemotherapy. The choice of treatment depends on many factors, such as the tumor grade or cancer stage, the implications for quality of life, and the shared provider/patient decision-making process. Indeed, choosing the right treatment is an individualized decision that varies according to case and circumstance.22
CONCLUSION
There has been an increase in prostate cancer screening in recent years. However, black men still lag behind when it comes to having DRE and PSA tests. Many factors, including cultural perceptions of medical care among black men, often cause delays in seeking evaluation and treatment. Developing consistent and uniform prostate cancer screening recommendations for black men would be an important step in reducing mortality and morbidity in this population.
1. Murphy SL, Kochanek KD, Xu J, Heron M. Deaths: final data for 2012. Natl Vital Stat Rep. 2015;63(9):37-80.
2. Nevada Division of Public and Behavioral Health. Comprehensive report: prostate cancer. September 2015. http://dpbh.nv.gov/Programs/Office_of_Public_Healh_Informatics_and_Epidemiology_(OPHIE)/. Accessed September 19, 2018.
3. Odedina FT, Dagne G, Pressey S, et al. Prostate cancer health and cultural beliefs of black men: the Florida prostate cancer disparity project. Infect Agent Cancer. 2011;6(2):1-7.
4. Winterich JA, Grzywacz JG, Quandt SA, et al. Men’s knowledge and beliefs about prostate cancer: education, race, and screening status. Ethn Dis. 2009;19(2):199-203.
5. Bhavsar A, Verma S. Anatomic imaging of the prostate. Biomed Res Int. 2014:1-9.
6. National Institutes of Health. Zones of the prostate. www.training.seer.cancer.gov/prostate/anatomy/zones.html. Accessed September 7, 2018.
7. CDC. Prostate cancer statistics. June 12, 2017. www.cdc.gov/cancer/prostate/statistics/. Accessed September 7, 2018.
8. American Cancer Society. Prostate cancer risk factors. www.cancer.org/cancer/prostate-cancer/causes-risks-prevention/what-causes.html.
9. Mkanta W, Ndjakani Y, Bandiera F, et al. Prostate cancer screening and mortality in blacks and whites: a hospital-based case-control study. J Natl Med Assoc. 2015;107(2):32-38.
10. Hoffman R. Screening for prostate cancer. N Engl J Med. 2011;365(21):2013-2019.
11. CDC. Who is at risk for prostate cancer? June 7, 2018. www.cdc.gov/cancer/prostate/basic_info/risk_factors.htm. Accessed September 7, 2018.
12. American Cancer Society. Cancer facts and figures 2017. www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2017/cancer-facts-and-figures-2017.pdf. Accessed September 7, 2018.
13. Woods VD, Montgomery SB, Belliard JC, et al. Culture, black men, and prostate cancer: what is reality? Cancer Control. 2004;11(6):388-396.
14. Braithwaite RL. Health Issues in the Black Community. 2nd ed. San Francisco, Calif: Jossey-Bass Publishers; 2001.
15. Ogunsanya ME, Brown CM, Odedina FT, et al. Beliefs regarding prostate cancer screening among black males aged 18 to 40 years. Am J Mens Health. 2017;11(1):41-53.
16. American Cancer Society. American Cancer Society Recommendations for Prostate Cancer Early Detection. April 14, 2016. www.cancer.org/cancer/prostate-cancer/early-detection/acs-recommendations.html. Accessed September 7, 2018.
17. American Urological Association. Early detection of prostate cancer. 2013. www.auanet.org/guidelines/prostate-cancer-early-detection-(2013-reviewed-for-currency-2018). Accessed September 7, 2018.
18. United States Preventive Services Task Force. Final recommendation statement. Prostate cancer: screening. 2018. www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/prostate-cancer-screening1. Accessed September 7, 2018.
19. Shenoy D, Packianathan S, Chen AM, Vijayakumar S. Do African-American men need separate prostate cancer screening guidelines? BMC Urol. 2016;16(19):1-6.
20. Odedina FT, Campbell E, LaRose-Pierre M, et al. Personal factors affecting African-American men’s prostate cancer screening behavior. J Natl Med Assoc. 2008;100(6):724-733.
Corporal punishment bans may reduce youth violence
Countries that ban corporal punishment of children have lower rates of adolescent fighting, with males in those countries about 30% less likely to engage in fighting and females almost 60% less likely to do so, according to a study of school-based health surveys completed by 403,604 adolescents in 88 countries, published in BMJ Open.
“These findings add to a growing body of evidence on links between corporal punishment and adolescent health and safety. A growing number of countries have banned corporal punishment as an acceptable means of child discipline, and this is an important step that should be encouraged,” said Frank J. Elgar, PhD, of McGill University in Montreal and his colleagues. “Health providers are well positioned to offer practical and effective tools that support such approaches to child discipline. Cultural shifts from punitive to positive discipline happen slowly.”
The researchers placed countries into three categories: those that have banned corporal punishment in the home and at school; those that have banned it in school only (which include the United States, Canada, and the United Kingdom); and those that have not banned corporal punishment in either setting.
Frequent fighting rates varied widely, Dr. Elgar and his colleagues noted, ranging from a low of less than 1% among females in Costa Rica, which bans all forms of corporal punishment, to a high of 35% among males in Samoa, which allows corporal punishment in both settings.
The 30 countries with full bans had rates of fighting 31% lower in males and 58% lower in females than the 20 countries with no ban. Thirty-eight countries with bans in schools but not in the home reported less fighting in females only – 44% lower than countries without bans.
The reasons for the gender difference in fighting rates among countries with partial bans are unclear, the authors said. “It could be that males, compared with females, experience more physical violence outside school settings or are affected differently by corporal punishment by teachers,” Dr. Elgar and his coauthors said. “Further investigation is needed.”
The study analyzed findings of two well-established surveys used internationally to measure fighting among adolescents: the World Health Organization Health Behavior in School-aged Children (HBSC) study and the Global School-based Health Survey (GSHS). The former is conducted among children ages 11, 13, and 15 in Canada, the United States, and most European countries every 4 years. The GSHS measures fighting among children aged 13-17 years in 55 low- and middle-income countries.
Among the limitations the study authors acknowledged was the inability to account for when the surveys were completed and when the bans were implemented, enforced, or modified, but they also pointed out the large and diverse sample of countries as a strength of the study.
Dr. Elgar and coauthors reported having no financial relationships. The work was supported by grants from the Canadian Institutes for Health Research, the Social Sciences and Humanities Research Council, and the Canada Research Chairs programme.
SOURCE: Elgar FJ et al. BMJ Open. 2018;8:e021616.
FROM BMJ OPEN
Key clinical point: Nations that ban corporal punishment of children have lower rates of youth violence.
Major finding: Countries with total bans on corporal punishment reported rates of fighting in males 31% lower than countries with no bans.
Study details: An ecological study evaluating school-based health surveys of 403,604 adolescents from 88 low- to high-income countries.
Disclosures: Dr. Elgar and coauthors reported having no financial relationships. The work was supported by grants from the Canadian Institutes for Health Research, the Social Sciences and Humanities Research Council, and the Canada Research Chairs programme.
Source: Elgar FJ et al. BMJ Open. 2018;8:e021616.
Older adults who self-harm face increased suicide risk
Adults aged 65 years and older with a self-harm history are more likely to die from unnatural causes – specifically suicide – than are those who do not self-harm, according to what researchers called the first study of self-harm that exclusively focused on older adults from the perspective of primary care.
“This work should alert policy makers and primary health care professionals to progress towards implementing preventive measures among older adults who consult with a GP,” lead author Catharine Morgan, PhD, and her coauthors wrote in the Lancet Psychiatry.
The study, which reviewed the primary care records of 4,124 older adults in the United Kingdom with incidents of self-harm, found that only 11.7% of those patients were referred to mental health services, said Dr. Morgan, of the National Institute for Health Research (NIHR) Greater Manchester (England) Patient Safety Translational Research Centre at the University of Manchester, and her coauthors. They also noted that, “compared with their peers who had not harmed themselves, adults in the self-harm cohort were an estimated 20 times more likely to die unnaturally during the first year after a self-harm episode and three or four times more likely to die unnaturally in subsequent years.”
The coauthors also found that, compared with a comparison cohort, the prevalence of a previous mental illness was twice as high among older adults who had engaged in self-harm (hazard ratio, 2.10; 95% confidence interval, 2.03-2.17). Older adults with a self-harm history also had a 20% higher prevalence of a physical illness (HR, 1.20; 95% CI, 1.17-1.23), compared with those without such a history.
Dr. Morgan and her coauthors also uncovered differing likelihoods of referral to specialists, depending on the socioeconomic status of the surrounding area. Older patients in “more socially deprived localities” were less likely to be referred to mental health services. Women also were more likely than men to be referred, highlighting “an important target for improvement across the health care system.” They also recommended avoiding tricyclics for older patients and encouraged maintaining “frequent medication reviews after self-harm.”
The coauthors noted potential limitations in their study, including reliance on clinicians who entered the primary care records and reluctance of coroners to report suicide as the cause of death in certain scenarios. However, they strongly encouraged general practitioners to intervene early and consider alternative medications when treating older patients who exhibit risk factors.
“Health care professionals should take the opportunity to consider the risk of self-harm when an older person consults with other health problems, especially when major physical illnesses and psychopathology are both present, to reduce the risk of an escalation in self-harming behaviour and associated mortality,” they wrote.
The NIHR Greater Manchester Patient Safety Translational Research Centre funded the study. Dr. Morgan and three of her coauthors declared no conflicts of interest. Two authors reported grants from the NIHR, and one author reported grants from the Department of Health and Social Care and the Healthcare Quality Improvement Partnership.
SOURCE: Morgan C et al. Lancet Psychiatry. 2018 Oct 15. doi: 10.1016/S2215-0366(18)30348-1.
The study by Morgan and her colleagues reinforced both the risks of self-harm among older adults and the absence of follow-up, but more research needs to be done, according to Rebecca Mitchell, PhD, an associate professor at the Australian Institute of Health Innovation at Macquarie University in Sydney.
Just 11.7% of older adults who self-harmed were referred to a mental health specialist, even though the authors found that the older adult cohort had twice the prevalence of a previous mental illness, compared with a matched comparison cohort. Though we may not always know the factors that contributed to these incidents of self-harm, “Morgan and colleagues have provided evidence that the clinical management of older adults who self-harm needs to improve,” Dr. Mitchell wrote.
Next steps could include “qualitative studies that focus on life experiences, social connectedness, resilience, and experience of health care use,” she wrote, painting a fuller picture of the intentions behind those self-harm choices.
“Further research still needs to be done on self-harm among older adults, including the replication of Morgan and colleagues’ research in other countries, to increase our understanding of how primary care could present an early window of opportunity to prevent repeated self-harm attempts and unnatural deaths,” Dr. Mitchell added.
These comments are adapted from an accompanying editorial (Lancet Psychiatry. 2018 Oct 15. doi: 10.1016/S2215-0366[18]30358-4). Dr. Mitchell declared no conflicts of interest.
FROM THE LANCET PSYCHIATRY
Key clinical point: Consider medications other than tricyclics, along with frequent medication reviews, for older adults who self-harm.
Major finding: “Adults in the self-harm cohort were an estimated 20 times more likely to die unnaturally during the first year after a self-harm episode and three or four times more likely to die unnaturally in subsequent years.”
Study details: A multiphase cohort study involving 4,124 adults in the United Kingdom, aged 65 years and older, with a self-harm episode recorded during 2001-2014.
Disclosures: The National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre funded the study. Dr. Morgan and three of her coauthors declared no conflicts of interest. Two authors reported grants from the NIHR, and one reported grants from the Department of Health and Social Care and the Healthcare Quality Improvement Partnership.
Source: Morgan C et al. Lancet Psychiatry. 2018 Oct 15. doi: 10.1016/S2215-0366(18)30348-1.
PURE Healthy Diet Score validated
MUNICH – A formula for scoring diet quality that significantly correlated with overall survival during its development phase received validation when tested using three independent, large data sets that together included almost 80,000 people.

With these new findings, the PURE Healthy Diet Score has now shown consistent, significant correlations with overall survival and the incidence of MI and stroke in a total of about 218,000 people from 50 countries who had been followed in any of four separate studies. This new validation is especially notable because the optimal diet identified by the scoring system diverged from current American dietary recommendations in two important ways: optimal food consumption included three daily servings of full-fat dairy and 1.5 daily servings of unprocessed red meat, Andrew Mente, PhD, reported at the annual congress of the European Society of Cardiology. He explained this finding as possibly related to the global scope of the study, which included many people from low- or middle-income countries where average diets are usually low in important nutrients.
The PURE Healthy Diet Score should now be “considered for broad, global dietary recommendations,” Dr. Mente said in a video interview. Testing a diet profile in a large, randomized trial would be ideal, but also difficult to run. Until then, the only alternative for defining an evidence-based optimal diet is observational data, as in the current study. The PURE Healthy Diet Score “is ready for routine use,” said Dr. Mente, a clinical epidemiologist at McMaster University in Hamilton, Canada.
Dr. Mente and his associates developed the PURE Healthy Diet Score with data from 138,527 people enrolled in the Prospective Urban Rural Epidemiology (PURE) study. They published a pair of reports in 2017 with their initial findings, which also included some of their first steps toward developing the score (Lancet. 2017 Nov 4;390[10107]:2037-49 and 2050-62). The PURE analysis identified seven food groups for which daily intake levels were significantly linked with survival: fruits, vegetables, nuts, legumes, dairy, red meat, and fish. Based on this, they devised a scoring formula that rates a person 1-5 for each of these seven food types, from the lowest quintile of consumption, which scores 1, to the highest quintile, which scores 5. The result is a score that can range from 7 to 35. They then divided the PURE participants into quintiles based on their intakes of all seven food types and found the highest survival rate among people in the quintile with the highest intake level across the food groups.
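As a concrete illustration, the scoring arithmetic described above fits in a few lines of Python. This is a minimal sketch: the food-group names come from the article, while the quintile cutoffs, example diet, and function names are invented placeholders rather than the study's actual values.

```python
# Minimal sketch of the PURE Healthy Diet Score arithmetic described above.
# Quintile cutoffs here are illustrative placeholders, not study values.

FOOD_GROUPS = ["fruits", "vegetables", "nuts", "legumes", "dairy", "red_meat", "fish"]

def quintile_rating(daily_servings: float, cutoffs: list) -> int:
    """Map a daily intake onto a 1-5 rating using four quintile boundaries."""
    rating = 1
    for boundary in cutoffs:  # cutoffs must be sorted ascending, length 4
        if daily_servings > boundary:
            rating += 1
    return rating  # 1 = lowest consumption quintile, 5 = highest

def pure_score(intake: dict, cutoffs_by_group: dict) -> int:
    """Sum the seven 1-5 ratings; totals range from 7 to 35."""
    return sum(quintile_rating(intake[g], cutoffs_by_group[g]) for g in FOOD_GROUPS)

# Hypothetical example: identical placeholder cutoffs for every food group.
cutoffs = {g: [0.5, 1.0, 2.0, 3.0] for g in FOOD_GROUPS}
diet = {"fruits": 4, "vegetables": 4, "nuts": 1, "legumes": 1.5,
        "dairy": 3, "red_meat": 1.5, "fish": 0.3}
print(pure_score(diet, cutoffs))  # prints 23 for this example diet
```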
The best-outcome quintile consumed on average about eight servings of fruits and vegetables daily, 2.5 servings of legumes and nuts, three servings of full-fat dairy, 1.5 servings of unprocessed red meat, and 0.3 servings of fish (about two servings of fish weekly). The best-outcome quintile received 54% of calories as carbohydrates, 28% as fat, and 18% as protein. In contrast, the worst-outcome quintile received 69% of calories from carbohydrates, 19% from fat, and 12% from protein.
In a model that adjusted for all measured confounders, the people in PURE with the best-outcome diet had statistically significant, 25% lower all-cause mortality, compared with people in the quintile with the worst diet.
To validate the formula, the researchers used three other data sets, drawn from trials run by their group at McMaster University:
- The ONTARGET and TRANSCEND studies (N Engl J Med. 2008 Apr 10;358[15]:1547-58), which together included diet and outcomes data for 31,546 patients with vascular disease. Diet analysis and scoring showed that enrolled people in the quintile with the highest score had a statistically significant 24% relative reduction in mortality, compared with the quintile with the worst score, after adjustment for measured confounders.
- The INTERHEART study (Lancet. 2004 Sep 11;364[9438]:937-52), which had data for 27,098 people and showed that the rate of the primary outcome, incident MI, was a statistically significant 22% lower after adjustment in the quintile with the best diet score, compared with the quintile with the worst score.
- The INTERSTROKE study (Lancet. 2016 Aug 20;388[10046]:761-75), with data for 20,834 people, showed that the rate of stroke was a statistically significant 25% lower after adjustment in the quintile with the highest diet score, compared with those with the lowest score.
Dr. Mente had no financial disclosures.
Dr. Mente and his associates have validated the PURE Healthy Diet Score. However, it remains unclear whether the score captures all of the many facets of diet, and it’s also uncertain whether the score is sensitive to changes in diet.
Another issue with the quintile analysis that the researchers used to derive the formula is that the spread between the median scores of the bottom, worst-outcome quintile and the top, best-outcome quintile was only 7 points on a scale that ranges from 7 to 35. The small magnitude of the difference in scores between the bottom and top quintiles might limit the discriminatory power of this scoring system.
Eva Prescott, MD, is a cardiologist at Bispebjerg Hospital in Copenhagen. She has been an advisor to AstraZeneca, Novo Nordisk, and Sanofi. She made these comments as the designated discussant for the report.
REPORTING FROM THE ESC CONGRESS 2018
Key clinical point: The PURE Healthy Diet Score, which correlates with overall survival and with MI and stroke incidence, held up in three independent validation data sets.
Major finding: The highest-scoring quintiles had about 25% fewer deaths, MIs, and strokes, compared with the lowest-scoring quintiles.
Study details: The PURE Healthy Diet Score underwent validation using three independent data sets with a total of 79,478 people.
Disclosures: Dr. Mente had no financial disclosures.
Optimizing use of TKIs in chronic leukemia
DUBROVNIK, CROATIA – Long-term efficacy and toxicity should inform decisions about tyrosine kinase inhibitors (TKIs) in chronic myeloid leukemia (CML), according to one expert.
Studies have indicated that long-term survival rates are similar whether CML patients receive frontline treatment with imatinib or second-generation TKIs. But the newer TKIs pose a higher risk of uncommon toxicities, Hagop M. Kantarjian, MD, said during the keynote presentation at Leukemia and Lymphoma, a meeting jointly sponsored by the University of Texas MD Anderson Cancer Center and the School of Medicine at the University of Zagreb, Croatia.
Dr. Kantarjian, a professor at MD Anderson Cancer Center in Houston, said most CML patients should receive daily TKI treatment – whether they are in complete cytogenetic response or still 100% Philadelphia chromosome–positive – because they will live longer.
Frontline treatment options for CML that are approved by the Food and Drug Administration include imatinib, dasatinib, nilotinib, and bosutinib.
Dr. Kantarjian noted that dasatinib and nilotinib bested imatinib in early analyses from clinical trials, but all three TKIs produced similar rates of overall survival (OS) and progression-free survival (PFS) at extended follow-up.
Dasatinib and imatinib produced similar rates of 5-year OS and PFS in the DASISION trial (J Clin Oncol. 2016 Jul 10;34[20]:2333-40).
In ENESTnd, 5-year OS and PFS rates were similar with nilotinib and imatinib (Leukemia. 2016 May;30[5]:1044-54).
However, the higher incidence of uncommon toxicities with the newer TKIs must be taken into account, Dr. Kantarjian said.
Choosing a TKI
Dr. Kantarjian recommends frontline imatinib for older patients (aged 65-70) and those who are low risk based on their Sokal score.
Second-generation TKIs should be given up front to patients who are at higher risk by Sokal and for “very young patients in whom early treatment discontinuation is important,” he said.
“In accelerated or blast phase, I always use the second-generation TKIs,” he said. “If there’s no binding mutation, I prefer dasatinib. I think it’s the most potent of them. If there are toxicities with dasatinib, bosutinib is equivalent in efficacy, so they are interchangeable.”
A TKI should not be abandoned unless there is loss of complete cytogenetic response – not loss of major molecular response – at the maximum tolerated adjusted dose, meaning one that does not cause grade 3-4 toxicities or chronic grade 2 toxicities, Dr. Kantarjian added.
“We have to remember that we can go down on the dosages of, for example, imatinib, down to 200 mg a day, dasatinib as low as 20 mg a day, nilotinib as low as 150 mg twice a day or even 200 mg daily, and bosutinib down to 200 mg daily,” he said. “So if we have a patient who’s responding with side effects, we should not abandon the particular TKI, we should try to manipulate the dose schedule if they are having a good response.”
Dr. Kantarjian noted that pleural effusion is a toxicity of particular concern with dasatinib, but lowering the dose to 50 mg daily results in similar efficacy and significantly less toxicity than 100 mg daily. For patients over the age of 70, a 20-mg dose can be used.
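The dose floors quoted above lend themselves to a simple lookup table. The sketch below merely encodes the numbers cited in the talk for reference; the helper function is a hypothetical illustration, not clinical dosing guidance.

```python
# Lower-bound daily doses (mg) cited in the talk, encoded as a lookup table.
# Illustration only; not clinical dosing guidance.
TKI_DOSE_FLOOR_MG_PER_DAY = {
    "imatinib": 200,   # reduced from the standard 400 mg/day
    "dasatinib": 20,   # 50 mg/day retains efficacy; 20 mg/day for age >70
    "nilotinib": 200,  # 150 mg twice daily (300 mg/day) or 200 mg once daily
    "bosutinib": 200,
}

def dose_is_at_or_above_floor(tki: str, mg_per_day: float) -> bool:
    """True if a proposed daily dose stays at or above the cited floor."""
    return mg_per_day >= TKI_DOSE_FLOOR_MG_PER_DAY[tki]

assert dose_is_at_or_above_floor("dasatinib", 50)
```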
Vaso-occlusive and vasospastic reactions are increasingly observed in patients treated with nilotinib. For that reason, Dr. Kantarjian said he prefers to forgo up-front nilotinib, particularly in patients who have cardiovascular or neurotoxic problems.
“The incidence of vaso-occlusive and vasospastic reactions is now close to 10%-15% at about 10 years with nilotinib,” Dr. Kantarjian said. “So it is not a trivial toxicity.”
For patients with vaso-occlusive/vasospastic reactions, “bosutinib is probably the safest drug,” Dr. Kantarjian said.
For second- or third-line therapy, patients can receive ponatinib or a second-generation TKI (dasatinib, nilotinib, or bosutinib), as well as omacetaxine or allogeneic stem cell transplant.
“If you disregard toxicities, I think ponatinib is the most powerful TKI, and I think that’s because we are using it at a higher dose that produces so many toxicities,” Dr. Kantarjian said.
Ponatinib is not used up front because of these toxicities, particularly pancreatitis, skin rashes, vaso-occlusive disorders, and hypertension, he added.
Dr. Kantarjian suggests giving ponatinib at 30 mg daily in patients with T315I mutation and those without guiding mutations who are resistant to second-generation TKIs.
Discontinuing a TKI
Dr. Kantarjian said patients can discontinue TKI therapy if they (a checklist sketch in code follows this list):
- Are low- or intermediate-risk by Sokal.
- Have quantifiable BCR-ABL transcripts.
- Are in chronic phase.
- Achieved an optimal response to their first TKI.
- Have been on TKI therapy for more than 8 years.
- Achieved a complete molecular response.
- Have had a molecular response for more than 2-3 years.
- Are available for monitoring every other month for the first 2 years.
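Read as an all-or-nothing checklist, these criteria map naturally onto a small predicate. The sketch below is a hypothetical illustration with invented field names, not a clinical decision tool.

```python
# Checklist sketch of the discontinuation criteria listed above.
# Field names are invented for illustration; decisions belong to the clinician.
from dataclasses import dataclass

@dataclass
class CmlPatient:
    sokal_risk: str                  # "low", "intermediate", or "high"
    quantifiable_bcr_abl: bool
    chronic_phase: bool
    optimal_response_first_tki: bool
    years_on_tki: float
    complete_molecular_response: bool
    years_in_molecular_response: float
    available_for_monitoring: bool   # every other month for the first 2 years

def may_discontinue_tki(p: CmlPatient) -> bool:
    """True only if every criterion from the list above is met."""
    return (
        p.sokal_risk in ("low", "intermediate")
        and p.quantifiable_bcr_abl
        and p.chronic_phase
        and p.optimal_response_first_tki
        and p.years_on_tki > 8
        and p.complete_molecular_response
        and p.years_in_molecular_response >= 2
        and p.available_for_monitoring
    )
```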
Dr. Kantarjian did not report any conflicts of interest at the meeting. However, he has previously reported relationships with Novartis, Bristol-Myers Squibb, Pfizer, and Ariad Pharmaceuticals.
The Leukemia and Lymphoma meeting is organized by Jonathan Wood & Association, which is owned by the parent company of this news organization.
REPORTING FROM LEUKEMIA AND LYMPHOMA 2018
Real-world data, machine learning, and the reemergence of humanism
As we relentlessly enter information into our EHRs, we typically perceive that we are just recording information about our patients to provide continuity of care and have an accurate representation of what was done. While that is true, the information we record is now increasingly being examined for many additional purposes. A whole new area of study has emerged over the last few years known as “real-world data,” and innovators are beginning to explore how machine learning (currently employed in other areas by such companies as Amazon and Google) may be used to improve the care of patients. The information we are putting into our EHRs is being translated into discrete data and is then combined with data from labs, pharmacies, and claims databases to examine how medications actually work when used in the wide and wild world of practice.
Let’s first talk about why real-world data are important. Traditionally, the evidence we rely on in medicine has come from randomized trials, which give us an unbiased assessment of the safety and efficacy of the medications we use. The Achilles’ heel of randomized trials is that, by their nature, they enroll a carefully defined group of patients – with specific inclusion and exclusion criteria – who may not be like the patients in our practices. Randomized trials are also conducted at sites that are different from most of our offices. The clinics where randomized trials are conducted have dedicated personnel to follow up with patients, make sure patients take their medications, and ensure that patients remember their follow-up visits. What this means is that the results of those studies might not reflect the results seen in the real world.
A nice example of this was reported recently in diabetes management. Randomized trials have shown that the glucagonlike peptide–1 (GLP-1) class of medications lowers hemoglobin A1c about twice as much as the dipeptidyl peptidase–4 (DPP-4) inhibitor class, but that difference in efficacy is not seen in practice. In real-world studies, the two classes of medications have about the same glucose-lowering efficacy. Why might that be? It may be that compliance with GLP-1s is lower than with DPP-4 inhibitors because of side effects such as nausea and GI intolerance. When patients miss more doses of their GLP-1, they do not achieve the HbA1c lowering seen in trials, in which compliance is far better.1
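A back-of-the-envelope calculation makes the adherence explanation concrete. The sketch below simply scales trial efficacy by the fraction of doses actually taken; every number in it is an illustrative assumption, not a figure from the cited study.

```python
# Toy arithmetic: scale trial HbA1c lowering by the fraction of doses taken.
# All numbers are illustrative assumptions, not data from the cited study.
def real_world_hba1c_drop(trial_drop_pct: float, adherence: float) -> float:
    return trial_drop_pct * adherence

glp1 = real_world_hba1c_drop(trial_drop_pct=1.0, adherence=0.5)  # GI side effects
dpp4 = real_world_hba1c_drop(trial_drop_pct=0.5, adherence=1.0)  # well tolerated
print(glp1, dpp4)  # both 0.5: a twofold trial advantage disappears in practice
```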
This exploration of real-world outcomes is just a first step in using the information documented in our charts. The exciting next step will be machine learning, also called deep learning.2 In this process, computers look at an enormous number of data points and find relationships that would otherwise not be detected. Imagine a supercomputer analyzing every blood pressure after any medication is changed across thousands, or even millions, of patients, and linking the outcome of that medication choice with the next blood pressure.3 Then imagine the computer meshing millions of data points that include all patients’ weights, ages, sexes, family histories of cardiovascular disease, renal function, etc. and matching those parameters with the specific medication and follow-up blood pressures. While much has been discussed about using genetics to advance personalized medicine, one can imagine these machine-based algorithms discovering connections about which medications work best for individuals with specific characteristics – without the need for additional testing. When the final loop of this cascade is connected, the computer could present recommendations to the clinician about which medication is optimal for the patient and then refine these recommendations, based on outcomes, to optimize safety and efficacy.
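To make that loop concrete, here is a toy sketch on synthetic data: a model learns follow-up blood pressure from patient features plus the medication chosen, then ranks candidate drugs for a new patient. Everything here – the feature names, the data-generating rule, and the choice of scikit-learn – is an assumption for illustration, not a description of any deployed system.

```python
# Toy sketch of the loop described above: learn follow-up blood pressure from
# patient features plus medication choice, then rank candidate drugs for a new
# patient. Synthetic data; all feature names and effects are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000
age = rng.normal(55, 12, n)
systolic = rng.normal(160, 15, n)
drug = rng.integers(0, 3, n)            # three hypothetical medications, coded 0-2

# Invented data-generating rule: one drug works better in older patients.
effect = np.where(drug == (age > 60).astype(int), -20.0, -10.0)
follow_up = systolic + effect + rng.normal(0, 5, n)

X = np.column_stack([age, systolic, drug])
model = GradientBoostingRegressor().fit(X, follow_up)

# For a new patient, predict follow-up BP under each candidate drug and pick
# the one with the lowest predicted value.
patient = [68.0, 170.0]
predicted = {d: model.predict([patient + [d]])[0] for d in range(3)}
best_drug = min(predicted, key=predicted.get)
print(predicted, best_drug)
```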
Some have argued that there is no way a computer will be able to perform as well as an experienced clinician who uses a combination of data and intuition to choose the best medication for his or her patient. This argument is similar to the controversy over autonomous cars. Many have asked how we can be assured that the cars will never have an accident. That is, of course, the wrong question. The correct question, as articulated very nicely by one of the innovators in that field, George Hotz, is how we can make a car that is safer than cars as they are currently driven (which means fewer deaths than the 15,000 that occur annually with humans behind the wheel).4
Our current method of providing care often leaves patients without appropriate guideline-recommended medications, and many don’t reach their HbA1c, blood pressure, cholesterol, and asthma-control goals. The era of machine learning with machine-generated algorithms may be much closer than we think, which will allow us to spend more time talking with patients, educating them about their disease, and supporting them in their efforts to remain healthy – an attractive future for both us and our patients.
References
1. Carls GS et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017 Nov;40(11):1469-78.
2. Naylor CD. On the prospects for a (deep) learning health care system. JAMA. 2018 Sep 18;320(11):1099-100.
3. Wang YR et al. Outpatient hypertension treatment, treatment intensification, and control in Western Europe and the United States. Arch Intern Med. 2007 Jan 22;167(2):141-7.
4. Super Hacker George Hotz: “I can make your car drive itself for under $1,000.”
Diagnosis is an ongoing concern in endometriosis
Diagnosis remains an ongoing concern in endometriosis, according to a new survey by Health Union, a family of online health communities.
Advances in support and understanding have been made through research and dissemination of information via the Internet, but complete control of endometriosis remains elusive, as only 13% of the 1,239 women surveyed from June 13 to July 14, 2018, said that their condition was under control with their current treatment plan.
Before control, of course, comes diagnosis, and the average gap between onset of symptoms and diagnosis was 8.6 years. Such a gap “can lead to delayed treatment and a potentially negative impact on quality of life,” Health Union said in a written statement. Those years of delays often involved visits to multiple physicians: 44% of respondents saw 3-5 physicians before receiving a diagnosis and 11% had to see 10 or more physicians.
“When comparing differences between symptom onset-to-diagnosis groups, there are some significant findings that suggest a fair amount of progress has been made, for the better,” Health Union said, noting that women who received a diagnosis in less than 5 years “were significantly less likely to think their symptoms were related to their menstrual cycles than those with a longer symptoms-to-diagnosis gap.” Respondents who had a gap of less than 2 years “were more likely to seek medical care as soon as possible” and to have used hormone therapies than those with longer gaps, the group said.
The most common diagnostic tests were laparoscopy, reported by 85% of respondents, and pelvic/transvaginal ultrasound, reported by 46%. Of the women who did not have a laparoscopy, 43% were undergoing a surgical procedure for another condition when their endometriosis was discovered. Laparoscopy also was by far the most common surgery to treat endometriosis, with a 79% prevalence among respondents, compared with 16% for laparotomy and 12% for oophorectomy, Health Union reported in Endometriosis in America 2018.
Common nonsurgical tactics to improve symptoms included increased water intake (79%), use of a heating pad (75%), and increased fresh fruit (64%) or green vegetables (62%) in the diet. Three-quarters of respondents also tried alternative and complementary therapies such as vitamins, exercise, and acupuncture, the report showed.
“Living with endometriosis is much easier now than it was not even a decade ago, as the Internet and social media have definitely increased knowledge about the disease,” said Endometriosis.net (one of the Health Union online communities) patient advocate Laura Kiesel. “When I first suspected I had the disease, in the mid-90s, hardly anyone had heard about it, and those aware of it didn’t think it was very serious. All these years later, I get a lot more sympathy and support – both online and in person – and people understand how serious, painful, and life altering it could be.”
Moderate hypofractionation preferred in new guideline for localized PC
Moderate hypofractionation is preferred over conventional fractionation in the treatment of patients with localized prostate cancer who are candidates for external beam radiotherapy (EBRT), according to a new clinical practice guideline.
A meta-analysis of randomized clinical trials showed that moderate hypofractionation delivered the same efficacy as conventional fractionation, with a mild increase in gastrointestinal toxicity, reported lead author Scott C. Morgan, MD, of OSF Medical Group in Bloomington, Illinois, and his colleagues. That toxicity drawback is outweighed by distinct advantages in resource utilization and patient convenience, which make moderate hypofractionation the preferred choice.
For many types of cancer, a shift toward fewer fractions of higher radiation is ongoing, driven largely by technological advances in radiation planning and delivery.
“Technical advances have permitted more precise and conformal delivery of escalated doses of radiation to the prostate, thereby improving the therapeutic ratio,” the authors wrote in the Journal of Clinical Oncology.
Fraction size is typically limited by the sensitivity of adjacent tissue, but prostate tumors are more sensitive to large doses per fraction than the nearby rectum, allowing higher fraction doses without disproportionate damage to healthy tissue. While conventional fractionation delivers 180-200 cGy per fraction, moderate hypofractionation delivers 240-340 cGy per fraction. Ultrahypofractionation is defined by fraction doses of 500 cGy or greater, the upper limit at which the linear-quadratic model of cell survival is considered reliable.
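To see why larger fractions can improve the therapeutic ratio, it helps to work through the biologically effective dose (BED) of the linear-quadratic model, BED = n × d × (1 + d/(α/β)), where n is the number of fractions, d the dose per fraction, and α/β the tissue’s fractionation sensitivity. The sketch below compares an illustrative conventional schedule with a moderately hypofractionated one; the schedules and the α/β values (roughly 1.5 Gy is commonly quoted for prostate tumor, 3 Gy for late rectal effects) are assumptions for illustration, not figures taken from the guideline.

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (Gy) under the linear-quadratic model."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1 + dose_per_fraction / alpha_beta)

# Illustrative schedules (assumed, not taken from the guideline):
# conventional 39 x 2.0 Gy vs. moderately hypofractionated 20 x 3.0 Gy.
for label, n, d in [("conventional 39 x 2.0 Gy", 39, 2.0),
                    ("hypofractionated 20 x 3.0 Gy", 20, 3.0)]:
    tumor = bed(n, d, alpha_beta=1.5)   # assumed prostate tumor alpha/beta
    rectum = bed(n, d, alpha_beta=3.0)  # assumed late rectal alpha/beta
    print(f"{label}: tumor BED = {tumor:.0f} Gy, rectal BED = {rectum:.0f} Gy")
```

Under these assumptions, the shorter schedule is roughly iso-effective for the tumor (a BED of about 180 Gy vs. 182 Gy) while the modeled rectal dose falls (120 Gy vs. 130 Gy), which is the kind of therapeutic-ratio gain the authors describe.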
The present guideline was developed through a 2-year collaborative effort among the American Society of Radiation Oncology, the American Society of Clinical Oncology, and the American Urological Association. Task force members included urologic surgeons and oncologists, medical physicists, and radiation oncologists from academic and nonacademic settings. A patient representative and a radiation oncology resident also were involved. After completing a systematic literature review, the team developed recommendations of varying strength, describing the quality of the supporting evidence and the level of consensus for each.
Of note, the guideline calls for moderate hypofractionation for patients with localized prostate cancer regardless of urinary function, anatomy, comorbidity, or age, with or without radiation to the seminal vesicles. Along with this recommendation, clinicians should discuss with patients the small increased risk of acute gastrointestinal toxicity compared with conventional fractionation, as well as the limited follow-up time in most relevant clinical trials (often less than 5 years).
The guideline conveyed more skepticism regarding ultrahypofractionation because of a lack of supporting evidence and comparative trials. As such, the authors conditionally recommended ultrahypofractionation for low-risk and intermediate-risk patients, the latter of whom should be encouraged to enroll in clinical trials.
“The conditional recommendations regarding ultrahypofractionation underscore the importance of shared decision making between clinicians and patients in this setting,” the authors wrote. “The decision to use ultrahypofractionated EBRT at this time should follow a detailed discussion of the existing uncertainties in the risk-benefit balance associated with this treatment approach and should be informed at all stages by the patient’s values and preferences.”
The authors reported financial affiliations with Amgen, GlaxoSmithKline, Bristol-Myers Squibb, and others.
SOURCE: Morgan et al. J Clin Oncol. 2018 Oct 11. doi: 10.1200/JCO.18.01097.
FROM JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: Moderate hypofractionation is preferred over conventional fractionation in treatment of patients with localized prostate cancer who are candidates for external beam radiotherapy (EBRT).
Major finding: The guideline panel reached a 94% consensus for the recommendation of moderate hypofractionation over conventional fractionation regardless of urinary function, anatomy, comorbidity, or age.
Study details: An evidence-based clinical practice guideline developed by the American Society of Radiation Oncology (ASTRO), the American Society of Clinical Oncology (ASCO), and the American Urological Association (AUA).
Disclosures: The authors reported financial affiliations with Amgen, GlaxoSmithKline, Bristol-Myers Squibb, and others.
Source: Morgan et al. J Clin Oncol. 2018 Oct 11. doi: 10.1200/JCO.18.01097.
Dr. Bawa-Garba and trainee liability
Question: Which of the following regarding medical trainee liability is best?
A. Trainees are commonly named as codefendants with their attending physician in a medical malpractice lawsuit.
B. “From a culture of blame to a culture of safety” is a rallying cry against poor work conditions.
C. House officers are always judged by a lower standard, because they are not fully qualified.
D. A, B, and C are correct.
E. A and C are correct.
Answer: A. A recent case of trainee liability in the United Kingdom resulted in criminal prosecution followed by the trainee being struck off the medical register.1 Dr. Hadiza Bawa-Garba, a pediatric trainee in the U.K. National Health Service, was prosecuted in a court of law and found guilty of manslaughter by gross negligence for the septic death of a 6-year-old boy with Down syndrome. The General Medical Council (GMC), the U.K. medical regulatory agency, voted to take away her license. The decision aroused the ire of physicians worldwide, who noted the poor supervision and undue pressures she was under.
In August 2018, the U.K. Court of Appeal noted that the general clinical competency of Dr. Bawa-Garba was never at issue, and that “the risk of her clinical practice suddenly and without explanation falling below the standards expected on any given day is no higher than for any other reasonably competent doctor.” It reversed the expulsion order and reinstated the 1-year suspension recommended by the Medical Practitioners Tribunal.
Even as the GMC accepted this appellate decision and convened a commission to look into criminal negligence, it nonetheless received heavy criticism for having overreacted – and for failing to speak out more forcefully in support of those practicing under oppressive conditions.
For example, the Doctors’ Association UK said the GMC had shown it could not be trusted to be objective and nonpunitive. The case, it noted, had “united the medical profession in fear and outrage,” whereby “a pediatrician in training ... a highly regarded doctor, with a previously unblemished record, [was] convicted of [the criminal offence of] gross negligence manslaughter for judgments made whilst doing the jobs of several doctors at once, covering six wards across four floors, responding to numerous pediatric emergencies, without a functioning IT system, and in the absence of a consultant [senior physician], all when just returning from 14 months of maternity leave.”
The Royal College of Pediatrics and Child Health said it had “previously flagged the importance of fostering a culture of supporting doctors to learn from their mistakes, rather than one which seeks to blame.” And the British Medical Association said, “lessons must be learned from this case, which raises wider issues about the multiple factors that affect patient safety in an NHS under extreme pressure, rather than narrowly focusing only on individuals.”2
The fiasco surrounding the Dr. Bawa-Garba case will hopefully result in action similar to that following the seminal report that medical errors account for nearly 100,000 annual hospital deaths in the United States. That study was not restricted to house staff mistakes, but involved multiple hospitals and hospital staff members. It spawned a nationwide reappraisal of how to approach medical errors, and it spurred the Institute of Medicine to recommend that the profession shift “from a culture of blame to a culture of safety.”3
Criminal prosecution for death or injury occurring during the course of patient care is decidedly rare in the United States – for trainees and attending physicians alike. Had the Bawa-Garba case taken place in the United States, a malpractice lawsuit would have been a far more likely outcome.
Lawsuits against U.S. house staff are not rare, and resident physicians are regularly joined as codefendants with their supervisors, who may be medical school faculty or community practitioners admitting patients under “team care.” Regulatory actions, however, are typically directed against fully licensed physicians rather than trainees. Corrective action against an errant resident, if warranted, instead comes from the director of the training program and can range from a warning to outright dismissal from the program.
How is negligence law applied to a trainee? Should it demand the same standard of care as it would a fully qualified attending physician?4 Surprisingly, the courts are split on this question. Some have favored using a dual standard of conduct, with trainees being held to a lower standard.
This was articulated in Rush v. Akron General Hospital, which involved a patient who had fallen through a glass door. The patient suffered several lacerations to his shoulder, which the intern treated. However, when two remaining pieces of glass were later discovered in the area of injury, the patient sued the intern for negligence.
The court dismissed the claim, finding that the intern had practiced with the skill and care of his peers of similar training. “It would be unreasonable to exact from an intern, doing emergency work in a hospital, that high degree of skill which is impliedly possessed by a physician and surgeon in the general practice of his profession, with an extensive and constant practice in hospitals and the community,” the court noted.5
However, not all courts have embraced this dual standard of review. The New Jersey Superior Court held that licensed residents should be judged by a standard applicable to a general practitioner, because any reduction in the standard of care would set a problematic precedent.6 In that case, the residents allegedly failed to reinsert a nasogastric tube, which caused the patient to aspirate.
And in Pratt v. Stein, a second-year resident was judged by an even higher standard – that of a specialist – after he had allegedly administered a toxic dose of neomycin to a postoperative patient, which resulted in deafness. Although the lower court had ruled that the resident should be held to the standard of an ordinary physician, the Pennsylvania appellate court disagreed, reasoning that “a resident should be held to the standard of a specialist when the resident is acting within his field of specialty. In our estimation, this is a sound conclusion. A resident is already a physician who has chosen to specialize, and thus possesses a higher degree of knowledge and skill in the chosen specialty than does the nonspecialist.”7
However, a subsequent decision from the same jurisdiction suggests a retreat from this unrealistic standard.
An orthopedic resident allegedly applied a cast with insufficient padding to the broken wrist of a patient. The plaintiff claimed this led to soft-tissue infection with Staphylococcus aureus, with complicating septicemia, staphylococcal endocarditis, and eventual death.
The court held that the resident’s standard of care should be “higher than that for general practitioners but less than that for fully trained orthopedic specialists. ... To require a resident to meet the same standard of care as fully trained specialists would be unrealistic. A resident may have had only days or weeks of training in the specialized residency program; a specialist, on the other hand, will have completed the residency program and may also have had years of experience in the specialized field. If we were to require the resident to exercise the same degree of skill and training as the specialist, we would, in effect, be requiring the resident to do the impossible.”8
Dr. Tan is emeritus professor of medicine and former adjunct professor of law at the University of Hawaii, Honolulu. This article is meant to be educational and does not constitute medical, ethical, or legal advice. For additional information, readers may contact the author at [email protected].
References
1. Saurabh Jha, “To Err Is Homicide in Britain: The Case of Hadiza Bawa-Garba.” The Health Care Blog, Jan. 30, 2018.
2. “‘Lessons Must Be Learned’: UK Societies on Bawa-Garba Ruling.” Medscape, Aug. 14, 2018.
3. “To Err is Human: Building a Safer Health System.” Institute of Medicine, National Academies Press, Washington D.C., 1999.
4. JAMA. 2004 Sep 1;292(9):1051-6.
5. Rush v. Akron General Hospital, 171 N.E.2d 378 (Ohio Ct. App. 1987).
6. Clark v. University Hospital, 914 A.2d 838 (N.J. Super. 2006).
7. Pratt v. Stein, 444 A.2d 674 (Pa. Super. 1980).
8. Jistarri v. Nappi, 549 A.2d 210 (Pa. Super. 1988).
Have apheresis units, will travel
BOSTON – If donors can’t get to the apheresis center, bring the apheresis center to the donors.
Responding to the request of a patient with cancer, David Anthony, Amber Lazareff, RN, and their colleagues at the University of California at Los Angeles Blood and Platelet Center explored adding mobile apheresis units to their existing community blood drives. They found that, with careful planning and coordination, they could augment their supply of vital blood products and introduce potential new donors to the idea of apheresis donations at the hospital.
“There was a needs drive for an oncology patient at UCLA. She wanted to bring in donors and had her whole community behind her, and we thought well, she’s an oncology patient and she uses platelets, and we had talked about doing platelets out in the field rather than just at fixed sites, and we thought that this would be a good chance to try it,” Mr. Anthony said in an interview at AABB 2018, the annual meeting of the group formerly known as the American Association of Blood Banks.
Until the mobile unit was established, apheresis platelet collections for the hospital-based donor center were limited to two fixed collection sites, with mobile units used only for collection of whole blood.
To see whether concurrent whole blood and platelet community drives were practical, the center’s blood donor field recruiter asked to schedule a community drive in a region of the county where potential donors had expressed a high level of interest in apheresis platelet donations.
Operations staff visited the site to assess its suitability, including appropriate space for donor registration and history taking, separate areas for whole blood and apheresis donations, and a donor recovery area. The assessment included ensuring that there were suitable electrical outlets, space, and support for apheresis machines.
“Over about 2 weeks we discussed with our medical directors, [infusion technicians], and our mobile people what we would need to do it. The recruiter out in the field was able to go to a high school drive out in that area, recruit donors, and get [platelet] precounts from them so that we could find out who was a good candidate,” Mr. Anthony said.
Once platelet counts from potential apheresis donors were in hand, 10 donors were prescreened based on their eligibility to donate multiple products; their history of donations and red blood cell loss; and, for women who had previously had more than one pregnancy, favorable HLA test results.
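As a concrete illustration of this prescreening step, the sketch below encodes the criteria just described as a simple filter. The field names and the precount threshold are hypothetical stand-ins (the poster reports the criteria only in narrative form), and the HLA check reflects the common practice of screening previously pregnant donors for HLA antibodies as a TRALI risk mitigation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Donor:
    # Hypothetical record; field names are illustrative, not from the poster.
    platelet_precount: int                 # platelets x 10^3/uL from a prior drive
    eligible_multiple_products: bool       # cleared to donate multiple products
    recent_rbc_loss: bool                  # recent red blood cell loss defers donation
    prior_pregnancies: int
    hla_antibody_negative: Optional[bool]  # tested only when relevant

def prescreen(donor: Donor, min_precount: int = 200) -> bool:
    """Apply the narrative prescreening criteria (threshold is an assumption)."""
    if donor.platelet_precount < min_precount:
        return False
    if not donor.eligible_multiple_products or donor.recent_rbc_loss:
        return False
    # Women with more than one prior pregnancy need a favorable HLA result.
    if donor.prior_pregnancies > 1 and donor.hla_antibody_negative is not True:
        return False
    return True

# Example: a multiparous donor with a favorable HLA result passes the screen.
print(prescreen(Donor(250, True, False, 2, True)))  # True
```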
Four of the prescreened donors were scheduled to donate platelets, and the time slot also included two backup donors, one of whom ultimately donated platelets. Of the four apheresis donors, three were first-time platelet donors.
The first drive collected seven platelet products, including three double products and one single product.
The donated products resulted in about $3,000 in cost savings by obviating the need to purchase products from an outside supplier, and they bolstered the blood bank’s inventory on a normally low collection day, the authors reported.
“We’ve had two more apheresis drives since then, and we’ll have another one in 3 weeks,” Mr. Anthony said.
He acknowledged that it is more challenging to recruit, educate, and ideally retain donors in the field than in the brick-and-mortar hospital setting.
“We have to make sure that they’re going to show up if we’re going to make the effort to take a machine out there, whereas at our centers, we have regular donors who come in every 2 weeks, it’s easy for them to make an appointment, and they know where we are,” he said.
The center plans to continue concurrent monthly whole blood and platelet collection drives, he added.
The pilot program was internally funded. The authors reported having no relevant conflicts of interest.
SOURCE: Anthony D et al., AABB 2018, Poster BBC 135.
AT AABB 2018
Key clinical point: With careful planning and coordination, apheresis platelet collection can be added to community whole blood drives, augmenting the hospital’s supply of blood products.
Major finding: Field-based collection of platelet products saved costs and augmented the hospital’s supply on a normally low collection day.
Study details: Pilot program testing apheresis platelet donations during community blood drives.
Disclosures: The pilot program was internally funded. The authors reported having no relevant conflicts of interest.
Source: Anthony D et al. AABB 2018, Poster BBC 135.