Like a hot potato
Most of us did our postgraduate training in tertiary medical centers, ivory towers of medicine often attached to or closely affiliated with medical schools. These are the places where the buck stops. Occasionally, a very complex patient might be sent to another tertiary center that claims to have a supersubspecialist, a one-of-a-kind physician with nationally recognized expertise. But for most patients, the tertiary medical center is the end of the line, and their physicians must manage with the resources at hand. They may confer with one another, but there is no place for them to pass the buck.
But most of us who chose primary care left the comforting cocoon of the teaching hospital complex when we finished our training. Those first few months and years in the hinterland can be angst-producing. Until we have established our own personal networks of consultants and mentors, patients with more than run-of-the-mill complaints may prompt us to reach for the phone or fire off an email plea for help to our recently departed mother ship.
It can take a while to establish the self-confidence – or at least the appearance of self-confidence – that physicians are expected to exude. But even after years of experience, none of us wants to watch a patient die or suffer preventable complications under our care when we know there is another facility that can provide a higher level of care just an ambulance ride or short helicopter trip away.
Our primary concern, of course, is ensuring that our patient is receiving the best care. How quickly we reach for the phone to refer out the most fragile patients depends on several factors. Do we practice in a community with a historic reputation for a low threshold for malpractice suits? How well do we know the patient and her family? Have we had time to establish bidirectional trust?
Is the patient’s diagnosis one that we feel comfortable managing, or one in which we believe the patient could deteriorate quickly and without warning? For example, a recently published study revealed that 20% of pediatric trauma patients were overtriaged and that the mechanism of injury – firearms or motor vehicle accidents – appeared to have an outsized influence on the triage decision (Trauma Surg Acute Care Open. 2019 Dec 29. doi: 10.1136/tsaco-2019-000300).
Because I have no experience with firearm injuries and minimal experience with motor vehicle injuries, I can understand why the emergency medical technicians might be quick to ship these patients to the trauma center. However, I hope that, were I offered better training and more opportunities to gain experience with these types of injuries, I would have a lower overtriage percentage.
That raises the question: What is an acceptable rate of overtriage or overreferral? It’s the same old question of how many normal appendixes one should remove to avoid a fatal outcome. Each of us arrives at a given clinical crossroads with our own level of experience and comfort.
But in the final analysis, it boils down to a personal decision and our own basic level of anxiety. Let’s face it, some of us worry more than others. Physicians come in all shades of anxiety. A hot potato in your hands may feel only room temperature to me.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Email him at [email protected].
The Mississippi solution
I agree wholeheartedly with Dr. William G. Wilkoff’s doubts that an increase in medical schools/students and/or foreign medical graduates is the answer to the physician shortage felt by many areas of the country (Letters From Maine, “Help Wanted,” Nov. 2019, page 19). All you have to do is look at the glut of physicians – and just about any other profession – in metropolitan areas versus rural America, and ask basic questions about why those doctors practice where they do. You will quickly discover that most are willing to forgo the possibility of a higher salary in areas where their presence is more needed in exchange for more school choices, jobs for a spouse, and likely a more favorable call schedule. Something more attractive than salary or the prospect of more “elbow room” is needed to draw them.
Here in Mississippi we may have found an answer to the problem. A few years ago our state legislature started the Mississippi Rural Health Scholarship Program, which pays for recipients to attend a state-run medical school in exchange for agreeing to practice at least 4 years in a rural area of the state (population less than 20,000) following their primary care residency (family medicine, pediatrics, ob.gyn., med-peds, internal medicine, and, recently added, psychiatry). Although a recent increase in the number of pediatric residency slots at our state’s sole program will no doubt also have a positive effect toward this end, a scholarship program such as Mississippi’s is the best way to compete with the various intangibles that lead people to choose bigger cities over rural areas of the state to practice their trade. Once there, many – like me – will find that such a practice is not only a good business decision but often a wonderful place to raise a family. Meanwhile, our own practice just added a fourth physician as a result of the Rural Health Scholarship Program, and we could not be more satisfied with the result.
Vaccinating most girls could eliminate cervical cancer within a century
Cervical cancer is the second most common cancer among women in lower- and middle-income countries, but universal human papillomavirus vaccination for girls would reduce new cervical cancer cases by about 90% over the next century, according to researchers.
Adding twice-lifetime cervical screening with human papillomavirus (HPV) testing would further reduce the incidence of cervical cancer, including in countries with the highest burden, the researchers reported in The Lancet.
Marc Brisson, PhD, of Laval University, Quebec City, and colleagues conducted this study using three models identified by the World Health Organization. The models were used to project reductions in cervical cancer incidence for women in 78 low- and middle-income countries based on the following HPV vaccination and screening scenarios:
- Universal girls-only vaccination at age 9 years, assuming 90% of girls vaccinated and a vaccine that is perfectly effective
- Girls-only vaccination plus cervical screening with HPV testing at age 35 years
- Girls-only vaccination plus screening at ages 35 and 45 years.
Averaged across the 78 countries, all three modeled scenarios would bring cervical cancer incidence below the elimination threshold, Dr. Brisson and colleagues found. Elimination was defined as four or fewer new cases per 100,000 women-years.
The simplest scenario, universal girls-only vaccination, was predicted to reduce age-standardized cervical cancer incidence from 19.8 cases per 100,000 women-years to 2.1 cases per 100,000 women-years (89.4% reduction) by 2120. That amounts to about 61 million potential cases avoided, with elimination targets reached in 60% of the countries studied.
HPV vaccination plus one-time screening was predicted to reduce the incidence of cervical cancer to 1.0 case per 100,000 women-years (95.0% reduction), and HPV vaccination plus twice-lifetime screening was predicted to reduce the incidence to 0.7 cases per 100,000 women-years (96.7% reduction).
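To make the arithmetic behind those percentages explicit, here is a minimal sketch in Python using only the incidence figures quoted above; small discrepancies from the published percentages reflect rounding of the incidence values.

```python
# Reductions implied by the modeled scenarios (figures from the article).
# Elimination threshold per the WHO definition cited above: <= 4 new cases
# per 100,000 women-years.
baseline = 19.8  # age-standardized cases per 100,000 women-years

scenarios = {
    "girls-only vaccination": 2.1,
    "vaccination + one-time screening": 1.0,
    "vaccination + twice-lifetime screening": 0.7,
}

ELIMINATION_THRESHOLD = 4.0

for name, incidence in scenarios.items():
    reduction = (baseline - incidence) / baseline * 100
    status = "below" if incidence <= ELIMINATION_THRESHOLD else "above"
    print(f"{name}: {incidence} per 100,000 "
          f"({reduction:.1f}% reduction, {status} elimination threshold)")
```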
Dr. Brisson and colleagues reported that, for the countries with the highest burden of cervical cancer (more than 25 cases per 100,000 women-years), adding screening would be necessary to achieve elimination.
To meet the same targets across all 78 countries, “our models predict that scale-up of both girls-only HPV vaccination and twice-lifetime screening is necessary, with 90% HPV vaccination coverage, 90% screening uptake, and long-term protection against HPV types 16, 18, 31, 33, 45, 52, and 58,” the researchers wrote.
Dr. Brisson and colleagues claimed that a strength of this study is the modeling approach, which compared three models “that have been extensively peer reviewed and validated with postvaccination surveillance data.”
The researchers acknowledged, however, that their modeling could not account for variations in sexual behavior from country to country, and the study was not designed to anticipate behavioral or technological changes that could affect cervical cancer incidence in the decades to come.
The study was funded by the WHO, the United Nations, and the Canadian and Australian governments. The WHO contributed to the study design, data analysis and interpretation, and writing of the manuscript. Two study authors reported receiving indirect industry funding for a cervical screening trial in Australia.
SOURCE: Brisson M et al. Lancet. 2020 Jan 30. doi: 10.1016/S0140-6736(20)30068-4.
New tools could help predict complication risks in lung and breast cancer
In this edition of “How I Will Treat My Next Patient,” I highlight the potential role of new models for predicting risks of common, clinically important situations in general oncology practice: severe neutropenia in lung cancer patients and locoregional recurrence of breast cancer.
Predicting neutropenia
Accurate, lung cancer–specific prediction models would be useful for estimating the risk of chemotherapy-induced neutropenia (CIN), especially febrile neutropenia (FN), since that particular toxicity is linked to infection, dose delays and dose reductions that can compromise treatment efficacy, and poor health-related quality of life. Lung cancer patients are often older adults with advanced disease and comorbid conditions, so they are particularly vulnerable to CIN.
Xiaowen Cao of Duke University, Durham, N.C., and coinvestigators published a model for predicting risk of severe CIN in advanced lung cancer patients, based on 10 pretreatment variables (Lung Cancer. 2020 Jan 5. doi: 10.1016/j.lungcan.2020.01.004). They developed their model to overcome limitations of the previously published work of Gary H. Lyman, MD, and colleagues, which was not specific to lung cancer and incorporated relative dose intensity as a predictor (Cancer. 2011;117:1917-27). Relative dose intensity is not determined until after a treatment course is completed.
The new prediction model was based on a lung cancer data set encompassing 11,352 patients from 67 phase 2-3 cooperative group studies conducted between 1991 and 2010. In this data set, the Lyman model had an area under the curve of 0.8772 in patients with small cell lung cancer, but an area under the curve of just 0.6787 in non–small cell lung cancer.
The model was derived from about two-thirds of the patients, randomly selected, and validated on the remaining third. The variables included are readily available in the clinic: age, gender, weight, body mass index, insurance status, disease stage, number of metastatic sites, chemotherapy agents used, number of chemotherapy agents, planned growth factor use, duration of planned therapy, pleural effusion, presence of symptoms, and performance status. The model had an area under the curve of 0.8348 in the training set and 0.8234 in the testing set.
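As a generic illustration of this split-sample approach – a sketch on synthetic data, not the authors’ code, model, or dataset – one might derive a logistic regression risk model on a random two-thirds of a cohort and report the area under the curve on both the derivation set and the held-out third:

```python
# Illustrative split-sample derivation/validation in the spirit of the
# approach described above (synthetic data; logistic regression is an
# assumed stand-in for the published model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fabricated stand-ins for pretreatment predictors (age, BMI, stage, etc.)
n = 11_352  # cohort size reported in the study
X = rng.normal(size=(n, 10))
# Synthetic outcome (severe neutropenia), loosely dependent on the predictors
logits = X @ rng.normal(size=10) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

# Derive on a random two-thirds, validate on the remaining third
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for label, Xs, ys in [("training", X_train, y_train), ("testing", X_test, y_test)]:
    auc = roc_auc_score(ys, model.predict_proba(Xs)[:, 1])
    print(f"AUC ({label} set): {auc:.4f}")
```

Reporting both numbers, as the study does, helps flag overfitting: a model that scores much higher on the derivation set than on the held-out set has learned noise rather than signal.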
How these results influence practice
The risk of an initial episode of FN is highest during a patient’s first cycle of chemotherapy, when most patients are receiving full-dose treatment, often without prophylactic measures. Guidelines from the National Comprehensive Cancer Network suggest the use of prophylactic growth factors in patients with more than a 20% risk of FN, and suggest considering prophylaxis in patients with a 10%-20% risk of FN. Underestimating those risks and failing to take adequate precautions may be particularly consequential for patients with lung cancer, who are generally older adults with comorbid conditions.
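Encoded as a toy decision rule, those thresholds look like the following; this is purely illustrative of the tiers as summarized above, not clinical software.

```python
# Toy encoding of the FN-risk tiers described in this article (NCCN guidance
# as summarized above); illustrative only.
def growth_factor_guidance(fn_risk: float) -> str:
    """Map an estimated febrile neutropenia (FN) risk (0-1) to a guidance tier."""
    if fn_risk > 0.20:
        return "prophylactic growth factors suggested"
    if fn_risk >= 0.10:
        return "consider prophylactic growth factors"
    return "prophylaxis not routinely suggested"

print(growth_factor_guidance(0.25))  # prophylactic growth factors suggested
print(growth_factor_guidance(0.15))  # consider prophylactic growth factors
```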
The comprehensive risk model for neutropenic complications that was developed by Dr. Lyman and colleagues was based on a large, prospective cohort including nearly 3,800 patients. The model had 90% sensitivity and a 96% predictive value, but it was not lung cancer specific and, in this latest study, did not perform as well in the 85% of lung cancer patients with non–small cell lung cancer. The Lyman data, however, were obtained in cancer patients treated with investigator-choice chemotherapy in community practices. The Lyman model remains the National Comprehensive Cancer Network standard for evaluating FN risk in patients embarking on chemotherapy for advanced malignancies. That should remain the case, pending additional validation of the new lung cancer–specific model at independent institutions treating heterogeneous patients in real-world settings.
Locoregional recurrence
A retrospective cohort analysis of SWOG 8814, a phase 3 study of tamoxifen alone versus chemotherapy followed by tamoxifen in postmenopausal, node-positive, hormone receptor–positive breast cancer patients, suggests that the 21-gene assay recurrence score (RS) can aid decisions about radiotherapy (RT).
Wendy A. Woodward, MD, PhD, and colleagues analyzed patients who underwent mastectomy or breast-conserving surgery as their local therapy (JAMA Oncol. 2020 Jan 9. doi: 10.1001/jamaoncol.2019.5559). They found that patients with an intermediate or high RS – according to the 21-gene assay OncotypeDX – had more locoregional recurrences (LRR; breast, chest wall, axilla, internal mammary, supraclavicular, or infraclavicular nodes).
There were 367 patients in SWOG 8814 who received tamoxifen alone or cyclophosphamide, doxorubicin, and fluorouracil followed by tamoxifen. LRR was observed in 5.8% of patients with a low RS (less than 18) and in 13.8% of patients with an intermediate or high RS (18 or higher). The estimated 10-year cumulative LRR incidence rates were 9.7% and 16.5%, respectively (P = .02).
In the subset of patients with one to three positive nodes who had mastectomy without radiotherapy, the LRR was 1.5% for those with low RS and 11.1% for those with intermediate or high RS (P = .051). No difference by RS was found in the 10-year rates of LRR among patients with four or more involved nodes who received a mastectomy without RT (25.9% vs. 27.0%; P = .27).
In multivariate analysis, incorporating RS, type of surgery, and number of involved nodes, intermediate or high RS was a significant predictor of LRR, with a hazard ratio of 2.36 (P = .04). The investigators suggested that RS, when available, should be one of the factors considered in selecting patients for postmastectomy RT.
How these results influence practice
Selecting the node-positive, hormone receptor–positive, breast cancer patients who should receive postmastectomy RT is difficult and controversial. This is particularly true for those postmenopausal patients with fewer than four involved nodes, no lymphatic or vascular invasion, and no extracapsular spread of disease into the axillary fat. Limited information exists on the ability of genomic assays to identify LRR risk.
Eleftherios P. Mamounas, MD, and colleagues examined the results of NSABP B-28, a trial of chemotherapy plus tamoxifen (J Natl Cancer Inst. 2017;109[4]. doi:10.1093/jnci/djw259). Postmastectomy RT was not permitted. They found that high RS correlated with greater LRR, and low RS with decreased LRR, among patients with one to three positive nodes. At first blush, SWOG 8814 represents a uniformly treated, prospective cohort with long-term follow-up (median, 8.5 years), and this independent analysis extends the findings of NSABP B-28.
However, as Dr. Woodward and colleagues point out, the current study has limitations. The use of RT was extracted retrospectively and may be underreported. More modern chemotherapy and RT may lower LRR from the risks observed in SWOG 8814. Finally, the modest number of LRR events precluded secondary analysis of RS as a continuous variable. This is important because the risk group cutoffs suggested by the authors are not aligned with those in the recently published TAILORx study or the ongoing RxPONDER trial.
The TAILOR RT (Regional Radiotherapy in Biomarker Low Risk Node Positive Breast Cancer) study is examining the safety of omitting RT among patients with low RS and one to three positive nodes. Until the TAILOR RT results are reported, the controversy regarding the role of postmastectomy RT will continue for patients with low nodal tumor burden and less aggressive tumor features, including low RS.
The observed LRR risk of 11.1% in SWOG 8814 among patients with N1 disease and an RS of 18 or higher suggests that genomic risk could be one of the factors justifying postmastectomy RT in postmenopausal patients with node-positive, hormone receptor–positive breast cancer until additional data emerge from the contemporary trials.
Dr. Lyss has been a community-based medical oncologist and clinical researcher for more than 35 years, practicing in St. Louis. His clinical and research interests are in the prevention, diagnosis, and treatment of breast and lung cancers and in expanding access to clinical trials to medically underserved populations.
Are doctors really at highest risk for suicide?
In October 2012, Pamela Wible, MD, attended a memorial service in her town for a physician who had died by suicide. Sitting in the third row, she began to count all the colleagues she had lost to suicide, and the result shocked her: three in her small town alone, 10 if she expanded her scope to all the doctors she’d ever known.
And so she set out on a mission to document as many physician suicides as she could, in an attempt to understand why her fellow doctors were taking their lives. “I viewed this as a personal quest,” she said in an interview. “I wanted to find out why my friends were dying.” Over the course of 7 years, she documented more than 1,300 physician suicides in the United States with the help of individuals who have lost colleagues and loved ones. She maintains a suicide prevention hotline for medical students and doctors.
On her website, Dr. Wible calls high physician suicide rates a “public health crisis.” She states many conclusions from the stories she’s collected, among them that anesthesiologists are at highest risk for suicide among physicians.
The claim that doctors have a high suicide rate is a common one beyond Dr. Wible’s documentation project. Frequently cited papers contend that 300 physicians die by suicide each year and that the physician suicide rate is higher than that of the general population. Researchers presenting at the American Psychiatric Association meeting in 2018 said physicians have the highest suicide rate of any profession – double that of the general population, with one completed suicide every day – and Medscape’s coverage of the talk has been widely referenced as supporting evidence.
A closer look at the data behind these claims, however, reveals the difficulty of establishing reliable statistics. Dr. Wible acknowledges that her data are limited. “We do not have accurate numbers. These [statistics] have come to me organically,” she said. Incorrectly coded death certificates are one reason it’s hard to get solid information. “When we’re trying to figure out how many doctors do die by suicide, it’s very hard to know.”
Similar claims have been made at various times about dentists, construction workers, and farmers, perhaps in an effort to call attention to difficult working conditions and inadequate mental health care. Such claims deserve careful scrutiny, said Katherine J. Gold, MD, an associate professor at the University of Michigan, Ann Arbor, who researches physician wellness, mental health, and suicide. It’s critical to know the accurate numbers, she said, “so we can know if we’re making progress.”
Scrutinizing a statistic
The idea for the research presented at the APA meeting in 2018 came up a year earlier “when there were quite a number of physician deaths by suicide,” lead author Omotola T’Sarumi, MD, psychiatrist and chief resident at Columbia University’s Harlem Hospital in New York at the time of the presentation, said in an interview. The poster describes the methodology as a systematic review of research articles published in the last 10 years. Dr. T’Sarumi and colleagues concluded that the rate was 28-40 suicides per 100,000 doctors, compared with a rate of 12.3 per 100,000 for the general population. “That just stunned me,” she said. “We should be doing better.” A peer-reviewed article on the work has not been published.
The references on the poster show limited data to support the headline conclusion that physicians have the highest suicide rate of any profession: four papers and a book chapter. The poster itself does not describe the methodology used to arrive at the numbers stated, and Dr. T’Sarumi said that she was unable to gain access to her previous research since moving to a new institution. Dr. Gold, the first author on one of the papers the poster cites, said there are “huge issues” with the work. “In my paper that they’re citing, I was not looking at rates of suicide,” she said. “This is just picking a couple of studies and highlighting them.”
Dr. Gold’s paper uses data from the Centers for Disease Control and Prevention’s National Violent Death Reporting System (NVDRS) to identify differences in risk factors and suicide methods between physicians and others who died by suicide in 17 states. The researchers did not attempt to quantify a difference in overall rates, but found that physicians who end their own lives are more likely to have a known mental health disorder with lower rates of medication treatment than nonphysicians. “Inadequate treatment and increased problems related to job stress may be potentially modifiable risk factors to reduce suicidal death among physicians,” the authors conclude.
The second study referenced in the 2018 poster, “A History of Physician Suicide in America” by Rupinder Legha, MD, offers a narrative history of physician suicide, including a reference to an 1897 editorial in the Philadelphia Medical and Surgical Reporter that says: “Our profession is more prone to suicide than any other.” The study does not, however, attempt to quantify that risk.
The third study referenced does offer a quantitative analysis based on death and census data in 26 states, and concludes that the suicide rate for white female physicians was about two times higher than the general population. For white male physicians and dentists, however, the study found that the overall rate of suicide was lower than in the general population, but higher in male physicians and dentists older than 55 years.
In search of reliable data
With all of the popular but poorly substantiated claims about physician suicide, Dr. Gold argues that getting accurate numbers is critical. Without them, there is no way to know if rates are increasing or decreasing over time, or if attempts to help physicians in crisis are effective.
The CDC recently released its own updated analysis of NVDRS data by major occupational group across 32 states in 2016. It shows that males and females in construction and extraction occupations had the highest suicide rates: 49.4 per 100,000 and 25.5 per 100,000, respectively. Males in the “health care practitioners and technical” occupation group had a lower than average rate, while females in the same group had a higher than average rate.
The most reliable data that exist, according to Dr. Gold, are found in the CDC’s National Occupational Mortality Surveillance catalog, though it does not contain information from all states and is missing several years of records. Based on its data, the CDC provides a proportionate mortality ratio (PMR) that indicates whether the proportion of deaths tied to a given cause for a given occupation appears high or low, compared with all other occupations. But occupation data are often missing from the CDC’s records, which could make the PMRs unreliable. “You’re talking about relatively small numbers,” said Dr. Gold. “Even if we’re talking about 400 a year, the difference in one or two or five people being physicians could make a huge difference in the rate.”
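A brief note on how a PMR works, using hypothetical numbers for illustration: if suicide accounted for 5% of all deaths among physicians but only 2% of deaths across all occupations, the PMR for physician suicide would be 5 ÷ 2 × 100 = 250. Because a PMR compares shares of deaths rather than rates per person, an elevated PMR does not by itself prove that the underlying suicide rate is proportionally higher; in a group with low overall mortality, such as physicians, any single cause can account for an outsized share of deaths.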
The PMR for physicians who have died by intentional self-harm suggests that they are 2.5 times as likely as other populations to die by suicide. Filtering the data by race and gender, it appears black female physicians are at highest risk, more than five times as likely to die by suicide as other populations, while white male physicians are twice as likely. Overall, the professions with the highest suicide risk in the database are hunters and trappers, followed by podiatrists, dentists, veterans, and nuclear engineers. Physicians follow with the fifth-highest rate.
The only way to get a true sense of physician suicide rates would be to collect all of the vital records data that states report to the federal government, according to Dr. Gold. “That would require 50 separate institutional review boards, so I doubt anyone is going to go to the effort to do that study,” she said.
Even without a reliable, exact number, it’s clear there are more physician suicides than there should be, Dr. Gold said. “This is a population that really should not be having a relatively high number of suicide deaths, whether it’s highest or not.”
As Dr. Legha wrote in “A History of Physician Suicide in America,” cited in the 2018 APA poster: “The problem of physician suicide is not solely a matter of whether or not it takes place at a rate higher than the general public. That a professional caregiver can fall ill and not receive adequate care and support, despite being surrounded by other caregivers, begs for a thoughtful assessment to determine why it happens at all.”
If you or someone you know is in need of support, the National Suicide Prevention Lifeline’s toll-free number is 1-800-273-TALK (8255). A version of this article first appeared on Medscape.com.
The power of an odd couple
The time has come for good men and women to unite and rise up against a common foe. For too long nurses and doctors have labored under the tyranny of a dictator who claimed to help them provide high-quality care for their patients while at the same time cutting their paperwork to nil. But like most autocrats he failed to engage his subjects in a meaningful dialogue as each new version of his promised improvements rolled off the drawing board. When the caregivers were slow to adopt these new nonsystems he offered them financial incentives and issued threats to their survival. Although they were warned that there might be uncomfortable adjustment periods, the caregivers were promised that the steep learning curves would level out and their professional lives would again be valued and productive.
Of course, the dictator is not a single person but a motley and disorganized conglomerate of user- and patient-unfriendly electronic health record nonsystems. Ask almost any nurse or physician for her feelings about computer-based medical record systems, and you will hear tales of long hours, disengagement, and frustration. Caregivers are unhappy at all levels, and patients have grown tired of their nurses and physicians spending most of their time looking at computer screens.
You certainly have heard this all before. But you are hearing it in hospital hallways and grocery store checkout lines as a low rumble of discontent emerging from separate individuals, not as a well-articulated and widely distributed voice of physicians as a group. To some extent this relative silence is because there is no such group, at least not in the same mold as a labor union. The term “labor union” may make you uncomfortable. But given the current climate in medicine, unionizing may be the best and only way to effect change.
But organizing to effect change in the workplace isn’t part of the physician genome. In the 1960s, a group of house officers in Boston engaged in a heal-in to successfully improve their salaries and working conditions. But over the ensuing half century physicians have remained tragically silent in the face of a changing workplace landscape in which they have gone from being independent owner operators in control of their destinies to becoming employees feeling powerless to improve their working conditions. This perceived impotence has escalated in the face of the challenge posed by the introduction of dysfunctional EHRs.
Ironically, a solution is at almost every physician’s elbow. In a recent New York Times opinion piece, Theresa Brown and Stephen Bergman acknowledge that physicians don’t seem prepared to mount a meaningful response to the failed promise of EHRs (“Doctors, Nurses and the Paperwork Crisis That Could Unite Them,” Dec. 31, 2019). They point out that, over the last half century, physicians have remained isolated on the sidelines, finding just enough voice to grumble. Nurses, by contrast, have in a variety of situations organized to effect change in their working conditions – in some cases by forming labor unions.
The authors of this op-ed piece, a physician and a nurse, make a strong argument that the time has come for nurses and doctors to shake off the shackles of their stereotypic roles and join in creating a loud, forceful, and effective voice to demand a working environment in which the computer functions as an asset and no longer as the terrible burden it has become. Neither group has the power to do it alone, but together they may be able to turn the tide. For physicians it will probably mean venturing several steps outside of their comfort zone. But working shoulder to shoulder with nurses may provide the courage to speak out.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Email him at [email protected].
What to do when stimulants fail for ADHD
NEW ORLEANS – A variety of reasons can contribute to the failure of stimulants to treat ADHD in children, such as comorbidities, missed diagnoses, inadequate medication dosage, side effects, major life changes, and other factors in the home or school environments, said Alison Schonwald, MD, of Harvard Medical School, Boston.
Stimulant medications indicated for ADHD usually work in 70%-75% of school-age children, but that leaves roughly one in four children whose condition can be more challenging to treat, she said.
“Look around you,” Dr. Schonwald told a packed room at the annual meeting of the American Academy of Pediatrics. “You’re not the only one struggling with this topic.” She sprinkled her presentation with case studies of patients with ADHD for whom stimulants weren’t working, examples that the audience clearly found familiar.
The three steps to take with treatment-resistant children sound simple: assess the child for factors linked to the poor response; develop a new treatment plan; and use Food and Drug Administration-approved nonstimulant medications, including off-label options, in the new plan.
But in the office, the process can be anything but simple when you must consider school and family environments, comorbidities, and other factors potentially complicating the child’s ability to function well.
Comorbidities
To start, Dr. Schonwald provided a chart of common coexisting problems in children with ADHD that included the recommended assessment and intervention:
- Mood and self-esteem issues call for the depression module of the Patient Health Questionnaire (PHQ-9) and the Mood and Feelings Questionnaire (MFQ), followed by interventions such as individual and peer group therapy and exercise.
- Anxiety can be assessed with the Screen for Child Anxiety Related Disorders (SCARED) and Spence Children’s Anxiety Scale, then treated similarly to mood and self-esteem issues.
- Bullying or trauma require taking a history during an interview, and treatment with individual and peer group therapy.
- Substance abuse should be assessed with the CRAFFT screening tool (Car, Relax, Alone, Forget, Friends, Trouble) and Screening to Brief Intervention (S2BI) Tool, then treated according to best practices.
- Executive function deficits, low cognitive abilities, and poor adaptive skills require a review of the child’s Individualized Education Program (IEP) testing, followed by personalized school and home interventions.
- Poor social skills, assessed in an interview, also require personalized interventions at home and in school.
Doctors also may need to consider other common comorbidities in children with ADHD, such as bipolar disorder, depression, learning disabilities, oppositional defiant disorder, and tic disorders.
Tic disorders typically have an onset around 7 years old and peak in midadolescence, declining in late teen years. An estimated 35%-90% of children with Tourette syndrome have ADHD, Dr. Schonwald said (Dev Med Child Neurol. 2006 Jul;48[7]:616-21).
Managing treatment with stimulants
A common starting dose for stimulants is 2.5-5 mg, but that may be too low for many children, Dr. Schonwald said. She recommended increasing the dose until an effect is seen and stopping at an effective dose the child can tolerate. The maximum recommended by the FDA is 60 mg/day for short-acting stimulants and 72 mg/day for extended-release ones, but some research has shown that dosage can go even higher without causing toxic effects (J Child Adolesc Psychopharmacol. 2010 Feb;20[1]:49-54).
Dr. Schonwald also suggested trying both methylphenidate and amphetamine medications, while recognizing that the latter tend to have more stimulant-related side effects.
Adherence is another consideration because multiple studies show high rates of noncompliance or discontinuation, such as up to 19% discontinuation for long-acting and 38% for short-acting stimulants (J Clin Psychiatry. 2015 Nov;76[11]:e1459-68; Postgrad Med. 2012 May;124[3]:139-48). A study of a school cohort in Philadelphia found only about one in five children were adherent (J Am Acad Child Adolesc Psychiatry. 2011 May;50[5]:480-9).
One potential solution to adherence challenges is a pill reminder smartphone app, such as Medisafe Medication Management, Pill Reminder-All in One, MyTherapy: Medication Reminder, or CareZone.
Dr. Schonwald noted several factors that can influence children’s response to stimulants. Among children with comorbid intellectual disability, for example, the response rate hovers around 40%-50%, lower than the roughly 75% seen in children without the disability (Res Dev Disabil. 2018 Dec;83:217-32). Children who get more sleep also tend to have better attention than those who get less (Atten Defic Hyperact Disord. 2017 Mar;9[1]:31-8).
She also offered strategies to manage problematic adverse effects from stimulants. Those experiencing weight loss can take their stimulant after breakfast, drink whole milk, and consider taking drug holidays.
To reduce stomachaches, children should take their medication with food, and you should look at whether the child is taking the lowest effective dose and whether anxiety may be involved. Similarly, children with headaches should take stimulants with food, and you should look at the dosage and ask whether the patient is getting adequate sleep.
Strategies to address difficulty falling asleep can include taking the stimulant earlier in the day or switching to a shorter-acting form, dexmethylphenidate, or another stimulant. If they’re having trouble staying asleep, inquire about sleep hygiene, and look for associations with other factors that might explain why the child is experiencing new problems with staying asleep. If these strategies are unsuccessful, you can consider prescribing melatonin or clonidine.
Alternatives to stimulants
Several medications besides stimulants are available to prescribe to children with ADHD if they aren’t responding adequately to stimulants, Dr. Schonwald said.
Atomoxetine performed better than placebo in treatment studies, with similar effects on weight loss, though with the lowest mean effect size on clinician ratings (Lancet Psychiatry. 2018 Sep;5[9]:727-38). Dr. Schonwald recommended starting atomoxetine in children under 40 kg at 0.5 mg/kg/day for 4 days, then increasing to 1.2 mg/kg/day. For children over 40 kg, the dose can start at 40 mg/day. The maximum dose ranges from 1.4 to 1.8 mg/kg/day or 100 mg/day.
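To make the weight-based arithmetic concrete (using a hypothetical 25-kg patient, not a case from the talk): the starting dose would be 0.5 mg/kg/day × 25 kg = 12.5 mg/day for the first 4 days, increasing to 1.2 mg/kg/day × 25 kg = 30 mg/day; at the lower weight-based ceiling, 1.4 mg/kg/day × 25 kg = 35 mg/day, well under the 100 mg/day cap.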
About 7% of white children and 2% of African American children are poor metabolizers of atomoxetine, and the drug has interactions with dextromethorphan, fluoxetine, and paroxetine, she noted. Side effects can include abdominal pain, dry mouth, fatigue, mood swings, nausea, and vomiting.
Two alpha-adrenergics that you can consider are clonidine and guanfacine. Clonidine, a hypotensive drug given at a dose of 0.05-0.2 mg up to three times a day, is helpful for hyperactivity and impulsivity rather than attention difficulties. Side effects can include depression, headache, rebound hypertension, and sedation, and it’s only FDA approved for ages 12 years and older.
An extended-release version of clonidine (Kapvay) is approved for monotherapy or adjunctive therapy for ADHD; it led to improvements in ADHD Rating Scale-IV scores as soon as the second week in an 8-week randomized controlled trial. Mild to moderate somnolence was the most common adverse event, and changes on electrocardiograms were minor (J Am Acad Child Adolesc Psychiatry. 2011 Feb;50[2]:171-9).
Guanfacine, also a hypotensive drug, given at a dose of 0.5-2 mg up to three times a day, has fewer data about its use for ADHD but appears to treat attention problems more effectively than hyperactivity. Also approved only for ages 12 years and older, guanfacine is less sedating, and its side effects can include agitation, headache, and insomnia. An extended-release version of guanfacine (brand name Intuniv) showed statistically significant reductions in ADHD Rating Scale-IV scores in a 9-week, double-blind, randomized, controlled trial. Side effects including fatigue, sedation, and somnolence occurred in the first 2 weeks but generally resolved, and participants returned to baseline during dose maintenance and tapering (J Am Acad Child Adolesc Psychiatry. 2009 Feb;48[2]:155-65).
Intuniv doses should start at 1 mg/day and increase no more than 1 mg/week, Dr. Schonwald said, until reaching a maintenance dose of 1-4 mg once daily, depending on the patient’s clinical response and tolerability. Children also must be able to swallow the pill whole.
Treating preschoolers
Preschool children are particularly difficult to diagnose given their normal range of temperament and development, Dr. Schonwald said. Their symptoms could be resulting from another diagnosis or from circumstances in the environment.
You should consider potential comorbidities and whether the child’s symptoms are situational or pervasive. About 55% of preschoolers with ADHD have at least one comorbidity, she said (Infants & Young Children. 2006 Apr-Jun;19[2]:109-22).
That said, stimulants usually are effective in very young children whose primary concern is ADHD. In a randomized controlled trial of 303 preschoolers, significantly more children experienced reduced ADHD symptoms with methylphenidate than with placebo. The trial’s “data suggest that preschoolers with ADHD need to start with low methylphenidate doses. Treatment may best begin using methylphenidate–immediate release at 2.5 mg twice daily, and then be increased to 7.5 mg three times a day during the course of 1 week. The mean optimal total daily [methylphenidate] dose for preschoolers was 14.2 plus or minus 8.1 mg/day” (J Am Acad Child Adolesc Psychiatry. 2006 Nov;45[11]:1284-93).
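To put the trial’s titration numbers in context: 2.5 mg twice daily amounts to 5 mg/day at the start, and 7.5 mg three times a day amounts to 22.5 mg/day at the top of the schedule, so the reported mean optimal dose of 14.2 mg/day falls roughly midway between those endpoints.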
In treating preschoolers, if the patient’s symptoms appear to get worse after starting a stimulant, you can consider a medication change. If symptoms are much worse, consider a lower dose or a different stimulant class, or whether the diagnosis is appropriate.
Five common components of poor behavior in preschoolers with ADHD are agitation, anxiety, explosiveness, hyperactivity, and impulsivity. If these issues are occurring throughout the day, consider reducing the dose or switching drug classes.
If the problems occur only in the morning, Dr. Schonwald said, optimize the morning structure and consider giving the medication earlier in the morning or adding a short-acting booster. If they occur in the late afternoon, consider a booster and reducing high-demand activities for the child.
If a preschooler experiences some benefit from the stimulant but still has problems, adjunctive atomoxetine or an alpha-adrenergic may help. Those medications also are recommended if the child has no benefit with the stimulant or cannot tolerate the lowest therapeutic dose.
Dr. Schonwald said she had no relevant financial disclosures.
NEW ORLEANS – A variety of reasons can contribute to the failure of stimulants to treat ADHD in children, such as comorbidities, missed diagnoses, inadequate medication dosage, side effects, major life changes, and other factors in the home or school environments, said Alison Schonwald, MD, of Harvard Medical School, Boston.
Stimulant medications indicated for ADHD usually work in 70%-75% of school-age children, but that leaves one in four children whose condition can be more challenging to treat, she said.
“Look around you,” Dr. Schonwald told a packed room at the annual meeting of the American Academy of Pediatrics. “You’re not the only one struggling with this topic.” She sprinkled her presentation with case studies of patients with ADHD for whom stimulants weren’t working, examples that the audience clearly found familiar.
The three steps you already know to do with treatment-resistant children sound simple: assess the child for factors linked to their poor response; develop a new treatment plan; and use Food and Drug Administration-approved nonstimulant medications, including off-label options, in a new plan.
But in the office, the process can be anything but simple when you must consider school and family environments, comorbidities, and other factors potentially complicating the child’s ability to function well.
Comorbidities
To start, Dr. Schonwald provided a chart of common coexisting problems in children with ADHD that included the recommended assessment and intervention:
- Mood and self-esteem issues call for the depression section of the patient health questionnaire (PHQ9) and Moods and Feelings questionnaire (MFQ), followed by interventions such as individual and peer group therapy and exercise.
- Anxiety can be assessed with the Screen for Child Anxiety Related Disorders (SCARED) and Spence Children’s Anxiety Scale, then treated similarly to mood and self-esteem issues.
- Bullying or trauma require taking a history during an interview, and treatment with individual and peer group therapy.
- Substance abuse should be assessed with the CRAFFT screening tool (Car, Relax, Alone, Forget, Friends, Trouble) and Screening to Brief Intervention (S2BI) Tool, then treated according to best practices.
- Executive function, low cognitive abilities, and poor adaptive skills require a review of the child’s Individualized Education Program (IEP) testing, followed by personalized school and home interventions.
- Poor social skills, assessed in an interview, also require personalized interventions at home and in school.
Doctors also may need to consider other common comorbidities in children with ADHD, such as bipolar disorder, depression, learning disabilities, oppositional defiant disorder, and tic disorders.
Tic disorders typically have an onset around 7 years old and peak in midadolescence, declining in late teen years. An estimated 35%-90% of children with Tourette syndrome have ADHD, Dr. Schonwald said (Dev Med Child Neurol. 2006 Jul;48[7]:616-21).
Managing treatment with stimulants
A common dosage amount for stimulants is 2.5-5 mg, but that dose may be too low for children, Dr. Schonwald said. She recommended increasing it until an effect is seen and stopping at the effective dose level the child can tolerate. The maximum recommended by the FDA is 60 mg/day for short-acting stimulants and 72 mg/day for extended-release ones, but some research has shown dosage can go even higher without causing toxic effects (J Child Adolesc Psychopharmacol. 2010 Feb;20[1]:49-54).
Dr. Schonwald also suggested trying both methylphenidate and amphetamine medication, while recognizing the latter tends to have more stimulant-related side effects.
Adherence is another consideration because multiple studies show high rates of noncompliance or discontinuation, such as up to 19% discontinuation for long-acting and 38% for short-acting stimulants (J Clin Psychiatry. 2015 Nov;76(11):e1459-68; Postgrad Med. 2012 May;124(3):139-48). A study of a school cohort in Philadelphia found only about one in five children were adherent (J Am Acad Child Adolesc Psychiatry. 2011 May;50[5]:480-9).
One potential solution to adherence challenges are pill reminder smartphone apps, such as Medisafe Medication Management, Pill Reminder-All in One, MyTherapy: Medication Reminder, and CareZone.
Dr. Schonwald noted several factors that can influence children’s response to stimulants. Among children with comorbid intellectual disability, for example, the response rate is lower than the average 75% of children without the disability, hovering around 40%-50% (Res Dev Disabil. 2018 Dec;83:217-32). Those who get more sleep tend to have improved attention, compared with children with less sleep (Atten Defic Hyperact Disord. 2017 Mar;9[1]:31-38).
She also offered strategies to manage problematic adverse effects from stimulants. Those experiencing weight loss can take their stimulant after breakfast, drink whole milk, and consider taking drug holidays.
To reduce stomachaches, children should take their medication with food, and you should look at whether the child is taking the lowest effective dose they can and whether anxiety may be involved. Similarly, children with headaches should take stimulants with food, and you should look at the dosage and ask whether the patient is getting adequate sleep.
Strategies to address difficulty falling asleep can include taking the stimulant earlier in the day or switching to a shorter-acting form, dexmethylphenidate, or another stimulant. If they’re having trouble staying asleep, inquire about sleep hygiene, and look for associations with other factors that might explain why the child is experiencing new problems with staying asleep. If these strategies are unsuccessful, you can consider prescribing melatonin or clonidine.
Alternatives to stimulants
Several medications besides stimulants are available to prescribe to children with ADHD if they aren’t responding adequately to stimulants, Dr. Schonwald said.
Atomoxetine performed better than placebo in treatment studies, with similar weight loss effects, albeit the lowest mean effect size in clinician ratings (Lancet Psychiatry. 2018 Sep;5[9]:727-38). Dr. Schonwald recommended starting atomoxetine in children under 40 kg at 0.5 mg/kg for 4 days, then increasing to 1.2 mg/kg/day. For children over 40 kg, the dose can start at 40 mg. Maximum dose can range from 1.4 to 1.8 mg/kg or 100 mg/day.
About 7% of white children and 2% of African American children are poor metabolizers of atomoxetine, and the drug has interactions with dextromethorphan, fluoxetine, and paroxetine, she noted. Side effects can include abdominal pain, dry mouth, fatigue, mood swings, nausea, and vomiting.
Two alpha-adrenergics that you can consider are clonidine and guanfacine. Clonidine, a hypotensive drug given at a dose of 0.05-0.2 mg up to three times a day, is helpful for hyperactivity and impulsivity rather than attention difficulties. Side effects can include depression, headache, rebound hypertension, and sedation, and it’s only FDA approved for ages 12 years and older.
An extended release version of clonidine (Kapvay) is approved for monotherapy or adjunctive therapy for ADHD; it led to improvements in ADHD–Rating Scale-IV scores as soon as the second week in an 8-week randomized controlled trial. Mild to moderate somnolence was the most common adverse event, and changes on electrocardiograms were minor (J Am Acad Child Adolesc Psychiatry. 2011 Feb;50[2]:171-9).
Guanfacine, also a hypotensive drug, given at a dose of 0.5-2 mg up to three times a day, has fewer data about its use for ADHD but appears to treat attention problems more effectively than hyperactivity. Also approved only for ages 12 years and older, guanfacine is less sedating, and its side effects can include agitation, headache , and insomnia. An extended-release version of guanfacine (brand name Intuniv) showed statistically significant reductions in ADHD Rating Scale-IV scores in a 9-week, double-blind, randomized, controlled trial. Side effects including fatigue, sedation, and somnolence occurred in the first 2 weeks but generally resolved, and participants returned to baseline during dose maintenance and tapering (J Am Acad Child Adolesc Psychiatry. 2009 Feb;48[2]:155-65).
Intuniv doses should start at 1 mg/day and increase no more than 1 mg/week, Dr. Schonwald said, until reaching a maintenance dose of 1-4 mg once daily, depending on the patient’s clinical response and tolerability. Children also must be able to swallow the pill whole.
Treating preschoolers
Preschool children are particularly difficult to diagnose given their normal range of temperament and development, Dr. Schonwald said. Their symptoms could be resulting from another diagnosis or from circumstances in the environment.
You should consider potential comorbidities and whether the child’s symptoms are situational or pervasive. About 55% of preschoolers have at least one comorbidity, she said (Infants & Young Children. 2006 Apr-Jun;19[2]:109-122.)
That said, stimulants usually are effective in very young children whose primary concern is ADHD. In a randomized controlled trial of 303 preschoolers, significantly more children experienced reduced ADHD symptoms with methylphenidate than with placebo. The trial’s “data suggest that preschoolers with ADHD need to start with low methylphenidate doses. Treatment may best begin using methylphenidate–immediate release at 2.5 mg twice daily, and then be increased to 7.5 mg three times a day during the course of 1 week. The mean optimal total daily [methylphenidate] dose for preschoolers was 14.2 plus or minus 8.1 mg/day” (J Am Acad Child Adolesc Psychiatry. 2006 Nov;45[11]:1284-93).
In treating preschoolers, if the patient’s symptoms appear to get worse after starting a stimulant, you can consider a medication change. If symptoms are much worse, consider a lower dose or a different stimulant class, or whether the diagnosis is appropriate.
Five common components of poor behavior in preschoolers with ADHD include agitation, anxiety, explosively, hyperactivity, and impulsivity. If these issues are occurring throughout the day, consider reducing the dose or switching drug classes.
If it’s only occurring in the morning, Dr. Schonwald said, optimize the morning structure and consider giving the medication earlier in the morning or adding a short-acting booster. If it’s occurring in late afternoon, consider a booster and reducing high-demand activities for the child.
If a preschooler experiences some benefit from the stimulant but still has problems, adjunctive atomoxetine or an alpha adrenergic may help. Those medications also are recommended if the child has no benefit with the stimulant or cannot tolerate the lowest therapeutic dose.
Dr. Schonwald said she had no relevant financial disclosures.
EXPERT ANALYSIS FROM AAP 2019
Should supplemental MRI be used in otherwise average-risk women with extremely dense breasts?
While the frequency of dense breasts decreases with age, approximately 10% of women in the United States have extremely dense breasts (Breast Imaging Reporting and Data System [BI-RADS] category D), and another 40% have heterogeneously dense breasts (BI-RADS category C).1 Women with dense breasts have both an increased risk for developing breast cancer and reduced mammographic sensitivity for breast cancer detection compared with women who have nondense breasts.2
These 2 observations have led the majority of states to pass legislation requiring that women with dense breasts be informed of their breast density, and most require that providers discuss these results with their patients. Thoughtful clinicians who review the available literature, however, will find sparse evidence on which to counsel patients as to next steps.
Now, a recent trial adds to our knowledge about supplemental magnetic resonance imaging (MRI) breast screening in women with extremely dense breasts.
DENSE trial offers high-quality data
Bakker and colleagues studied women aged 50 to 74 who were participating in a Netherlands population-based biennial mammography screening program.3 They enrolled average-risk women with extremely dense breasts who had a negative screening digital mammogram into the Dense Tissue and Early Breast Neoplasm Screening (DENSE) multicenter trial. The women were randomly assigned to receive either continued biennial digital mammography or supplemental breast MRI.
The primary outcome was the between-group difference in the development of interval breast cancers—that is, breast cancers detected by women or their providers between rounds of screening mammography. Interval breast cancers were chosen as the primary outcome for 2 reasons:
- interval cancers appear to be more aggressive tumors than those cancers detected by screening mammography
- interval cancers can be identified over a shorter time interval, making them easier to study than outcomes such as breast cancer mortality, which typically require more than a decade to identify.
The DENSE trial’s secondary outcomes included recall rates from MRI, cancer detection rates on MRI, positive predictive value of MRIs requiring biopsy, and breast cancer characteristics (size, stage) diagnosed in the different groups.
Between-group difference in incidence of interval cancers
A total of 40,373 women with extremely dense breasts were screened; 8,061 of these were randomly assigned to receive breast MRI and 32,312 to continued mammography only (1:4 cluster randomization) across 12 mammography centers in the Netherlands. Among the women assigned to the MRI group, 59% actually underwent MRI (4,783 of the 8,061).
The interval cancer rate in the mammography-only group was 5.0 per 1,000 screenings (95% confidence interval [CI], 4.3–5.8), while the interval cancer rate in the MRI-assigned group was 2.5 per 1,000 screenings (95% CI, 1.6–3.8) (TABLE 1).3
Key secondary outcomes
Of the women who underwent supplemental MRI, 9.49% were recalled for additional imaging, follow-up, or biopsy. Of the 4,783 women who had an MRI, 300 (6.3%) underwent a breast biopsy, and 79 breast cancers (1.65%) were detected. Sixty-four of these cancers were invasive, and 15 were ductal carcinoma in situ (DCIS). Among women who underwent a biopsy for an MRI-detected abnormality, the positive predictive value was 26.3%.
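For readers who want to see how these secondary outcomes fit together, the reported percentages can be reproduced from the counts quoted above (a minimal sketch in Python; the recall count of roughly 454 women is derived from the 9.49% figure, not reported separately):

```python
# Counts quoted in the text for women who underwent supplemental MRI
mri_attendees = 4_783
biopsies = 300
cancers_detected = 79        # 64 invasive + 15 DCIS

recalled = round(mri_attendees * 0.0949)            # ~454 women (derived, not reported)
biopsy_rate = biopsies / mri_attendees              # 0.063  -> ~6.3%
detection_rate = cancers_detected / mri_attendees   # 0.0165 -> 1.65%, or 16.5 per 1,000
ppv_of_biopsy = cancers_detected / biopsies         # 0.263  -> 26.3% positive predictive value
```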
Tumor characteristics. For women who developed breast cancer during the study, both tumor size at diagnosis and tumor stage (early vs late) were described. TABLE 2 shows these results in the women who had their breast cancer detected on MRI, those in the MRI-assigned group who developed interval cancer, and those in the mammography-only group who had interval cancers.3 Overall, among women with interval cancers, tumors were smaller at diagnosis in the MRI-assigned group than in the mammography-only group.
Study contributes valuable data, but we need more on long-term outcomes
The trial by Bakker and colleagues employed a solid study design: women were randomly assigned to supplemental MRI screening or ongoing biennial mammography, and nearly all cancers were identified during the relatively short follow-up period. In addition, very few women were lost to follow-up, and secondary outcomes, including false-positive rates, were collected to help providers and patients better understand some of the potential downsides of supplemental screening.
The substantial reduction in interval cancers (50% in the intent-to-screen analysis and 84% in the women who actually underwent supplemental MRI) was highly statistically significant (P<.001). While there were substantially fewer interval cancers in the MRI-assigned group, the interval cancers that did occur were of similar stage as those in the women assigned to the mammography-only group (TABLE 2).
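The 50% and 84% reductions follow directly from the interval cancer rates reported above; this short sketch shows the arithmetic (the roughly 0.8 per 1,000 rate among actual MRI attendees is implied by the 84% figure rather than quoted in this summary):

```python
mammography_only_rate = 5.0   # interval cancers per 1,000 screenings
mri_assigned_rate = 2.5       # per 1,000, intent-to-screen

itt_reduction = 1 - mri_assigned_rate / mammography_only_rate   # 0.50 -> 50%
# An 84% reduction among women who actually underwent MRI implies an
# interval cancer rate of about 5.0 * (1 - 0.84) = 0.8 per 1,000.
implied_attendee_rate = mammography_only_rate * (1 - 0.84)      # ~0.8 per 1,000
```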
Data demonstrate that interval cancers appear to be more aggressive than screen-detected cancers.4 While reducing interval cancers should be a good thing overall, it remains unproven that using supplemental MRI in all women with dense breasts would reduce breast cancer-specific mortality, all-cause mortality, or the risk of more invasive treatments (for example, the need for chemotherapy or requirement for mastectomy).
On the other hand, using routine supplemental breast MRI in women with extremely dense breasts would result in very substantial use of resources, including cost, radiologist time, provider time, and machine time. In the United States, approximately 49 million women are aged 50 to 74.5 Breast MRI charges commonly range from $1,000 to $4,000. If the 4.9 million women with extremely dense breasts underwent supplemental MRI this year, the approximate cost would be somewhere between $4.9 billion and $19.6 billion for imaging alone. This does not include callbacks, biopsies, or provider time for ordering, interpreting, and arranging for follow-up.
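The cost estimate is straightforward back-of-envelope arithmetic from the figures just quoted; a minimal sketch:

```python
us_women_50_to_74 = 49_000_000
extremely_dense_share = 0.10            # ~10% with BI-RADS category D breasts
charge_low, charge_high = 1_000, 4_000  # typical per-MRI charges, in dollars

eligible_women = us_women_50_to_74 * extremely_dense_share  # 4.9 million women
cost_low = eligible_women * charge_low                      # $4.9 billion
cost_high = eligible_women * charge_high                    # $19.6 billion
```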
While the reduction in interval cancers seen in this study is promising, more assurance of improvement in important outcomes—such as reduced mortality or reduced need for more invasive breast cancer treatments—should precede any routine change in practice.
Unanswered questions
This study did not address a number of other important questions, including:
Should MRI be done with every round of breast cancer screening given the possibility of prevalence bias? Prevalence bias can be defined as more cancers detected in the first round of MRI screening with possible reduced benefit in future rounds of screening. The study authors indicated that they will continue to analyze the study results to see what occurs in the next round of screening.
Is there a similar impact on decreased interval cancers in women undergoing annual mammography or in women screened between ages 40 and 49? This study was conducted in women aged 50 to 74 undergoing mammography every 2 years. In the United States, annual mammography in women aged 40 to 49 is frequently recommended.
What effect does supplemental MRI screening have in women with heterogeneously dense breasts, who represent 40% of the population? The US Food and Drug Administration recommends that all women with dense breasts be counseled regarding options for management.6
Do these results translate to the more racially and ethnically diverse populations of the United States? In the Netherlands, where this study was conducted, 85% to 90% of women are either Dutch or of western European origin. Women of different racial and ancestral backgrounds have biologically different breast cancers and cancer risk (for example, higher rates of triple-negative breast cancers in African American women; 10-fold higher rates of BRCA pathogenic variants in Ashkenazi Jewish women).
Use validated tools to assess risk comprehensively
Women aged 50 to 74 with extremely dense breasts have reduced interval cancers following a normal biennial mammogram if supplemental MRI is offered, but the long-term benefit of identifying these cancers earlier is unclear. Until more data are available on important long-term outcomes (such as breast cancer mortality and need for more invasive treatments), providers should consider breast density in the context of a more comprehensive assessment of breast cancer risk using a validated breast cancer risk assessment tool.
I prefer the modified version of the International Breast Cancer Intervention Study (IBIS) tool, which is readily available online (https://ibis.ikonopedia.com/).7 This tool incorporates several breast cancer risk factors, including reproductive risk factors, body mass index, BRCA gene status, breast density, and family history. The tool takes 1 to 2 minutes to complete and provides an estimate of a woman’s 10-year risk and lifetime risk of breast cancer.
If the lifetime risk exceeds 20%, I offer the patient supplemental MRI screening, consistent with current recommendations of the National Comprehensive Cancer Network and the American Cancer Society.8,9 I generally recommend starting breast imaging screening 7 to 10 years prior to the youngest breast cancer occurrence in the family, with mammography starting no earlier than age 30 and MRI no earlier than age 25. Other validated tools also can be used.10-13
Incorporating breast density and other important risk factors allows a more comprehensive analysis upon which to counsel women about the value (benefits and harms) of breast imaging.8
- Sprague BL, Gagnon RE, Burt V, et al. Prevalence of mammographically dense breasts in the United States. J Natl Cancer Inst. 2014;106:dju255. doi: 10.1093/jnci/dju255.
- Boyd NF, Guo H, Martin LJ, et al. Mammographic density and the risk and detection of breast cancer. N Engl J Med. 2007;356:227-236.
- Bakker MF, de Lange SV, Pijnappel RM, et al; for the DENSE Trial Study Group. Supplemental MRI screening for women with extremely dense breast tissue. N Engl J Med. 2019;381:2091-2102.
- Drukker CA, Schmidt MK, Rutgers EJT, et al. Mammographic screening detects low-risk tumor biology breast cancers. Breast Cancer Res Treat. 2014;144:103-111.
- Statista website. Resident population of the United States by sex and age as of July 1, 2018. https://www.statista.com/statistics/241488/population-of-the-us-by-sex-and-age. Accessed January 6, 2020.
- US Food and Drug Administration website. Mammography: what you need to know. https://www.fda.gov/consumers/consumer-updates/mammography-what-you-need-know. Accessed January 13, 2020.
- IBIS (International Breast Cancer Intervention Study) website. Online Tyrer-Cuzick Model Breast Cancer Risk Evaluation Tool. ibis.ikonopedia.com. Accessed January 13, 2020.
- Bevers TB, Anderson BO, Bonaccio E, et al; National Comprehensive Cancer Network. Breast cancer screening and diagnosis: NCCN practice guidelines in oncology. JNCCN. 2009;7:1060-1096.
- Saslow D, Boetes C, Burke W, et al. American Cancer Society guidelines for breast screening with MRI as an adjunct to mammography. CA Cancer J Clin. 2007;57:75-89.
- Antoniou AC, Cunningham AP, Peto J, et al. The BOADICEA model of genetic susceptibility to breast and ovarian cancers: updates and extensions. Br J Cancer. 2008;98:1457-1466.
- Claus EB, Risch N, Thompson WD. Autosomal dominant inheritance of early-onset breast cancer: implications for risk prediction. Cancer. 1994;73:643-651.
- Parmigiani G, Berry D, Aguilar O. Determining carrier probabilities for breast cancer-susceptibility genes BRCA1 and BRCA2. Am J Hum Genet. 1998;62:145-158.
- Tyrer J, Duffy SW, Cuzick J. A breast cancer prediction model incorporating familial and personal risk factors. Stat Med. 2004;23:1111-1130.
Delaying flu vaccine didn’t drop fever rate for childhood immunizations
Delaying the influenza vaccine did not reduce the rate of fever after childhood immunizations, according to a randomized trial.
An increased risk for febrile seizures had been seen when the three vaccines were administered together, wrote Emmanuel B. Walter, MD, MPH, and coauthors, so they constructed a trial comparing a simultaneous administration strategy with a sequential strategy that delayed inactivated influenza vaccine (IIV) administration by about 2 weeks.
In all, 221 children aged 12-16 months were enrolled in the randomized study. A total of 110 children received quadrivalent IIV (IIV4), DTaP, and 13-valent pneumococcal conjugate vaccine (PCV13) simultaneously and returned for a dental health education visit 2 weeks later. For 111 children, DTaP and PCV13 were administered at study visit 1, and IIV4 was given along with dental health education 2 weeks later. Most children in both groups also received at least one nonstudy vaccine at the first study visit. Eleven children in the simultaneous group and four in the sequential group didn’t complete the study.
There was no difference between study groups in the combined rates of fever on the first 2 days after study visits 1 and 2 taken together: 8% of children in the simultaneous group and 9% of those in the sequential group had fever of 38° C or higher (adjusted relative risk, 0.87; 95% confidence interval, 0.36-2.10).
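As a rough check, the crude relative risk computed from these two proportions lands close to the reported adjusted estimate, and the wide confidence interval spanning 1.0 is what makes the difference nonsignificant (a minimal sketch, using only the figures above):

```python
fever_simultaneous = 0.08   # proportion with fever >= 38 C on days 1-2, both visits combined
fever_sequential = 0.09

crude_rr = fever_simultaneous / fever_sequential   # ~0.89, near the adjusted RR of 0.87
ci_low, ci_high = 0.36, 2.10
significant = not (ci_low <= 1.0 <= ci_high)       # False: the interval spans 1.0
```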
However, children in the simultaneous group were more likely to receive antipyretic medication in the first 2 days after visit 1 (37% versus 22%; P = .020), reported Dr. Walter, professor of pediatrics at Duke University, Durham, N.C., and coauthors. Because it’s rare for febrile seizures to occur after immunization, the authors didn’t make the occurrence of febrile seizure a primary or secondary endpoint of the study; no seizures occurred in study participants. They did hypothesize that the total proportion of children having fever would be higher in the simultaneous than in the sequential group – a hypothesis not supported by the study findings.
Children were excluded, or their study vaccinations were delayed, if they had received antipyretic medication within the 72 hours preceding the visit or at the study visit, or if they had a temperature of 38° C or more.
Parents monitored participants’ temperatures for 8 days after visits by using a study-provided temporal thermometer once daily at about the same time, and also by checking the temperature if their child felt feverish. Parents also recorded any antipyretic use, medical care, other symptoms, and febrile seizures.
The study was stopped earlier than anticipated because unexpectedly high levels of influenza activity made it unethical to delay influenza immunization, explained Dr. Walter and coauthors.
Participants were a median 15 months old; most were non-Hispanic white and had private insurance. Most participants didn’t attend day care.
“Nearly all fever episodes and days of fever on days 1-2 after the study visits occurred after visit 1,” reported Dr. Walter and coinvestigators. They saw no difference between groups in the proportion of children who had a fever of 38.6° C or higher on days 1-2 after either study visit.
The mean peak temperature – about 38.5° C – on combined study visits 1 and 2 didn’t differ between groups. Similarly, for those participants who had a fever, the mean postvisit fever duration of 1.3 days was identical between groups.
Parents also were asked about their perceptions of the vaccination schedule their children received. Over half of parents overall (56%) reported that they disliked having to bring their child in for two separate clinic visits, with more parents in the sequential group than the simultaneous group reporting this (65% versus 48%).
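Those group-level percentages are consistent with the 56% overall figure once group sizes are taken into account (a sketch assuming all study completers answered the survey, which the summary above does not state explicitly):

```python
simultaneous_completers = 110 - 11   # 99 children (11 didn't complete the study)
sequential_completers = 111 - 4      # 107 children

overall = (0.48 * simultaneous_completers + 0.65 * sequential_completers) / (
    simultaneous_completers + sequential_completers
)
# ~0.57, in line with the reported 56% overall given rounding of the group percentages
```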
Generalizability of the findings and comparison with previous studies are limited, noted Dr. Walter and coinvestigators, because the composition of influenza vaccine varies from year to year. No signal for seizures was seen in the Vaccine Safety Datalink after IIV during the 2017-2018 influenza season, wrote the investigators. The 2010-2011 influenza season’s IIV formulation was associated with increased febrile seizure risk, indicating that the IIV formulation for that year may have been more pyrogenic than the 2017-2018 formulation.
Also, children deemed at higher risk of febrile seizure were excluded from the study, so findings may have limited applicability to these children. The lack of parental blinding also may have influenced antipyretic administration or other symptom reporting, although objective temperature measurement should not have been affected by the lack of blinding, wrote Dr. Walter and collaborators.
The study was funded by the Centers for Disease Control and Prevention. One coauthor reported potential conflicts of interest from financial support received from GlaxoSmithKline, Sanofi Pasteur, Pfizer, Merck, Protein Science, Dynavax, and Medimmune. The remaining authors have no relevant financial disclosures.
SOURCE: Walter EB et al. Pediatrics. 2020;145(3):e20191909.
according to a randomized trial.
An increased risk for febrile seizures had been seen when the three vaccines were administered together, wrote Emmanuel B. Walter, MD, MPH, and coauthors, so they constructed a trial that compared a simultaneous administration strategy that delayed inactivated influenza vaccine (IIV) administration by about 2 weeks.
In all, 221 children aged 12-16 months were enrolled in the randomized study. A total of 110 children received quadrivalent IIV (IIV4), DTaP, and 13-valent pneumococcal conjugate vaccine (PCV13) simultaneously and returned for a dental health education visit 2 weeks later. For 111 children, DTaP and PCV13 were administered at study visit 1, and IIV4 was given along with dental health education 2 weeks later. Most children in both groups also received at least one nonstudy vaccine at the first study visit. Eleven children in the simultaneous group and four in the sequential group didn’t complete the study.
There was no difference between study groups in the combined rates of fever on the first 2 days after study visits 1 and 2 taken together: 8% of children in the simultaneous group and 9% of those in the sequential group had fever of 38° C or higher (adjusted relative risk, 0.87; 95% confidence interval, 0.36-2.10).
However, children in the simultaneous group were more likely to receive antipyretic medication in the first 2 days after visit 1 (37% versus 22%; P = .020), reported Dr. Walter, professor of pediatrics at Duke University, Durham, N.C., and coauthors. Because it’s rare for febrile seizures to occur after immunization, the authors didn’t make the occurrence of febrile seizure a primary or secondary endpoint of the study; no seizures occurred in study participants. They did hypothesize that the total proportion of children having fever would be higher in the simultaneous than in the sequential group – a hypothesis not supported by the study findings.
Children were excluded, or their study vaccinations were delayed, if they had received antipyretic medication within the 72 hours preceding the visit or at the study visit, or if they had a temperature of 38° C or more.
Parents monitored participants’ temperatures for 8 days after visits by using a study-provided temporal thermometer once daily at about the same time, and also by checking the temperature if their child felt feverish. Parents also recorded any antipyretic use, medical care, other symptoms, and febrile seizures.
The study was stopped earlier than anticipated because unexpectedly high levels of influenza activity made it unethical to delay influenza immunization, explained Dr. Walter and coauthors.
Participants were a median 15 months old; most were non-Hispanic white and had private insurance. Most participants didn’t attend day care.
“Nearly all fever episodes and days of fever on days 1-2 after the study visits occurred after visit 1,” reported Dr. Walter and coinvestigators. They saw no difference between groups in the proportion of children who had a fever of 38.6° C on days 1-2 after either study visit.
The mean peak temperature – about 38.5° C – on combined study visits 1 and 2 didn’t differ between groups. Similarly, for those participants who had a fever, the mean postvisit fever duration of 1.3 days was identical between groups.
Parents also were asked about their perceptions of the vaccination schedule their children received. Over half of parents overall (56%) reported that they disliked having to bring their child in for two separate clinic visits, with more parents in the sequential group than the simultaneous group reporting this (65% versus 48%).
Generalizability of the findings and comparison with previous studies are limited, noted Dr. Walter and coinvestigators, because the composition of influenza vaccine varies from year to year. No signal for seizures was seen in the Vaccine Safety Datalink after IIV during the 2017-2018 influenza season, wrote the investigators. The 2010-2011 influenza season’s IIV formulation was associated with increased febrile seizure risk, indicating that the IIV formulation for that year may have been more pyrogenic than the 2017-2018 formulation.
Also, children deemed at higher risk of febrile seizure were excluded from the study, so findings may have limited applicability to these children. The lack of parental blinding also may have influenced antipyretic administration or other symptom reporting, although objective temperature measurement should not have been affected by the lack of blinding, wrote Dr. Walker and collaborators.
The study was funded by the Centers for Disease Control and Prevention. One coauthor reported potential conflicts of interest from financial support received from GlaxoSmithKline, Sanofi Pasteur, Pfizer, Merck, Protein Science, Dynavax, and Medimmune. The remaining authors have no relevant financial disclosures.
SOURCE: Walter EB et al. Pediatrics. 2020;145(3):e20191909.
according to a randomized trial.
An increased risk for febrile seizures had been seen when the three vaccines were administered together, wrote Emmanuel B. Walter, MD, MPH, and coauthors, so they constructed a trial that compared a simultaneous administration strategy that delayed inactivated influenza vaccine (IIV) administration by about 2 weeks.
In all, 221 children aged 12-16 months were enrolled in the randomized study. A total of 110 children received quadrivalent IIV (IIV4), DTaP, and 13-valent pneumococcal conjugate vaccine (PCV13) simultaneously and returned for a dental health education visit 2 weeks later. For 111 children, DTaP and PCV13 were administered at study visit 1, and IIV4 was given along with dental health education 2 weeks later. Most children in both groups also received at least one nonstudy vaccine at the first study visit. Eleven children in the simultaneous group and four in the sequential group didn’t complete the study.
There was no difference between study groups in the combined rates of fever on the first 2 days after study visits 1 and 2 taken together: 8% of children in the simultaneous group and 9% of those in the sequential group had fever of 38° C or higher (adjusted relative risk, 0.87; 95% confidence interval, 0.36-2.10).
However, children in the simultaneous group were more likely to receive antipyretic medication in the first 2 days after visit 1 (37% versus 22%; P = .020), reported Dr. Walter, professor of pediatrics at Duke University, Durham, N.C., and coauthors. Because it’s rare for febrile seizures to occur after immunization, the authors didn’t make the occurrence of febrile seizure a primary or secondary endpoint of the study; no seizures occurred in study participants. They did hypothesize that the total proportion of children having fever would be higher in the simultaneous than in the sequential group – a hypothesis not supported by the study findings.
Children were excluded, or their study vaccinations were delayed, if they had received antipyretic medication within the 72 hours preceding the visit or at the study visit, or if they had a temperature of 38° C or more.
Parents monitored participants’ temperatures for 8 days after visits by using a study-provided temporal thermometer once daily at about the same time, and also by checking the temperature if their child felt feverish. Parents also recorded any antipyretic use, medical care, other symptoms, and febrile seizures.
The study was stopped earlier than anticipated because unexpectedly high levels of influenza activity made it unethical to delay influenza immunization, explained Dr. Walter and coauthors.
Participants were a median 15 months old; most were non-Hispanic white and had private insurance. Most participants didn’t attend day care.
“Nearly all fever episodes and days of fever on days 1-2 after the study visits occurred after visit 1,” reported Dr. Walter and coinvestigators. They saw no difference between groups in the proportion of children who had a fever of 38.6° C on days 1-2 after either study visit.
The mean peak temperature – about 38.5° C – on combined study visits 1 and 2 didn’t differ between groups. Similarly, for those participants who had a fever, the mean postvisit fever duration of 1.3 days was identical between groups.
Parents also were asked about their perceptions of the vaccination schedule their children received. Over half of parents overall (56%) reported that they disliked having to bring their child in for two separate clinic visits, with more parents in the sequential group than the simultaneous group reporting this (65% versus 48%).
Generalizability of the findings and comparison with previous studies are limited, noted Dr. Walter and coinvestigators, because the composition of influenza vaccine varies from year to year. No signal for seizures was seen in the Vaccine Safety Datalink after IIV during the 2017-2018 influenza season, wrote the investigators. The 2010-2011 influenza season’s IIV formulation was associated with increased febrile seizure risk, indicating that the IIV formulation for that year may have been more pyrogenic than the 2017-2018 formulation.
Also, children deemed at higher risk of febrile seizure were excluded from the study, so findings may have limited applicability to these children. The lack of parental blinding also may have influenced antipyretic administration or other symptom reporting, although objective temperature measurement should not have been affected, wrote Dr. Walter and collaborators.
The study was funded by the Centers for Disease Control and Prevention. One coauthor reported potential conflicts of interest from financial support received from GlaxoSmithKline, Sanofi Pasteur, Pfizer, Merck, Protein Science, Dynavax, and Medimmune. The remaining authors have no relevant financial disclosures.
SOURCE: Walter EB et al. Pediatrics. 2020;145(3):e20191909.
FROM PEDIATRICS
Key clinical point: Fevers were no less common when influenza vaccine was delayed for children receiving DTaP and pneumococcal vaccinations.
Major finding: There was no difference between study groups in the combined rates of fever on the first 2 days after study visits 1 and 2 taken together: 8% of children in the simultaneous group and 9% of those in the sequential group had fever of 38° C or higher (adjusted relative risk, 0.87).
Study details: Randomized, nonblinded trial of 221 children aged 12-16 months receiving scheduled vaccinations.
Disclosures: The study was funded by the Centers for Disease Control and Prevention. One coauthor reported financial support received from GlaxoSmithKline, Sanofi Pasteur, Pfizer, Merck, Protein Science, Dynavax, and Medimmune.
Source: Walter EB et al. Pediatrics. 2020;145(3):e20191909.
The Zzzzzuper Bowl, and 4D needles
HDL 35, LDL 220, hike!
Super Bowl Sunday is, for all intents and purposes, an American national holiday. And if there’s one thing we Americans love to do on our national holidays, it’s eat. And eat. Oh, and also eat.
According to research from LetsGetChecked, about 70% of Americans who watch the Super Bowl overindulge on game day. Actually, the term “overindulge” may not be entirely adequate: On Super Bowl Sunday, the average football fan ate nearly 11,000 calories and 180 g of saturated fat. That’s more than four times the recommended daily calorie intake, and seven times the recommended saturated fat intake.
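As a quick back-of-the-envelope check on those multiples, here is a short Python sketch; the reference intakes below are ballpark values we have assumed, since the survey does not state its baselines:

```python
# Sanity check of the reported multiples, using assumed reference values
# (the survey's exact baselines aren't given): roughly 2,500 kcal/day and
# about 25 g of saturated fat, i.e., roughly 10% of daily calories.
game_day_kcal = 11_000
game_day_sat_fat_g = 180
ref_kcal = 2_500        # assumed recommended daily calories
ref_sat_fat_g = 25      # assumed recommended daily saturated fat, in grams

print(f"Calories: {game_day_kcal / ref_kcal:.1f}x the recommendation")            # 4.4x
print(f"Sat fat:  {game_day_sat_fat_g / ref_sat_fat_g:.1f}x the recommendation")  # 7.2x
```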
Naturally, the chief medical officer for LetsGetChecked called this level of food consumption potentially dangerous if it becomes a regular occurrence and asked that people “question if they need to be eating quite so much.” Yeah, we think he’s being a party pooper, too.
So, just what did Joe Schmoe eat this past Sunday that has the experts all worried?
LetsGetChecked thoughtfully asked, and the list is something to be proud of: wings, pizza, fries, burgers, hot dogs, ribs, nachos, sausage, ice cream, chocolate, cake. The average fan ate all these, and more. Our personal favorite: the 2.3 portions of salad. Wouldn’t want to be too unhealthy now. Gotta have that salad to balance everything else out.
Strangely, the survey didn’t seem to ask about the presumably prodigious quantities of alcohol the average Super Bowl fan consumed. So, if anything, that 11,000 calories is an underestimation. And it really doesn’t get more American than that.
Zzzzzuper Bowl
Hardly, according to the buzzzzzz-kills [Ed. note: Why so many Zs? Author note: Wait for it ...] at the American Academy of Sleep Medicine. In a report with the sleep-inducing title “AASM Sleep Prioritization Survey Monday after the Super Bowl,” the academy pulls the sheets back on America’s somnolent post–Super Bowl secret: We’re sleep deprived.
More than one-third of the 2,003 adults alert enough to answer the AASM survey said they were more tired than usual the day after the Super Bowl. And 12% of respondents admitted that they were “extremely tired.”
Millennials were the generation most likely to meet Monday morning in an extreme stupor, followed by the few Gen X’ers who could even be bothered to cynically answer such an utterly pointless collection of survey questions. Baby boomers had already gone to bed before the academy could poll them.
AASM noted that Cleveland fans were stumped by the survey’s questions about the Super Bowl, given that the Browns are always well rested on the Monday morning after the game.
The gift that keeps on grabbing
Rutgers, you had us at “morph into new shapes.”
We read a lot of press releases here at LOTME world headquarters, but when we saw New Jersey’s state university announcing that a new 4D-printed microneedle array could “morph into new shapes,” we were hooked, so to speak.
Right now, though, you’re probably wondering what 4D printing is. We wondered that, too. It’s like 3D printing, but “with smart materials that are programmed to change shape after printing. Time is the fourth dimension that allows materials to morph into new shapes,” as senior investigator Howon Lee, PhD, and associates explained it.
Microneedles are becoming increasingly popular as a replacement for hypodermics, but their “weak adhesion to tissues is a major challenge for controlled drug delivery over the long run,” the investigators noted. To try to solve the adhesion problem, they turned to – that’s right, you guessed it – insects and parasites.
When you think about it, it does make sense. What’s better at holding onto tissue than the barbed stinger of a honeybee or the microhooks of a tapeworm?
The microneedle array that Dr. Lee and his team have come up with has backward-facing barbs that interlock with tissue when it is inserted, which improves adhesion. It was those barbs that required the whole 4D-printing approach, they explained in Advanced Functional Materials.
That sounds great, you’re probably thinking now – but we need to show you the money, right? Okay.
During testing on chicken muscle tissue, adhesion with the new microneedle was “18 times stronger than with a barbless microneedle,” they reported.
The 4D microneedle’s next stop? Its own commercial during next year’s Super Bowl, according to its new agent.