PURE Healthy Diet Score validated

Despite validation, questions remain

A formula for scoring diet quality that significantly correlated with overall survival during its development phase has now been validated in three independent, large data sets that together included almost 80,000 people.


With these new findings, the PURE Healthy Diet Score has now shown consistent, significant correlations with overall survival and with the incidence of MI and stroke in a total of about 218,000 people from 50 countries followed in any of four separate studies. The validation is especially notable because the optimal diet identified by the scoring system diverged from current American dietary recommendations in two important ways: optimal food consumption included three daily servings of full-fat dairy and 1.5 daily servings of unprocessed red meat, Andrew Mente, PhD, reported at the annual congress of the European Society of Cardiology. He suggested this finding may relate to the global scope of the study, which included many people from low- or middle-income countries where average diets are often low in important nutrients.

The PURE Healthy Diet Score should now be “considered for broad, global dietary recommendations,” Dr. Mente said in a video interview. Testing a diet profile in a large, randomized trial would be ideal, but also difficult to run. Until then, the only alternative for defining an evidence-based optimal diet is observational data, as in the current study. The PURE Healthy Diet Score “is ready for routine use,” said Dr. Mente, a clinical epidemiologist at McMaster University in Hamilton, Canada.



Dr. Mente and his associates developed the PURE Healthy Diet Score with data from 138,527 people enrolled in the Prospective Urban Rural Epidemiology (PURE) study. They published a pair of reports in 2017 with their initial findings, which also included their first steps toward developing the score (Lancet. 2017 Nov 4;390[10107]:2037-49; 390[10107]:2050-62). The PURE analysis identified seven food groups whose daily intake levels were significantly linked with survival: fruits, vegetables, nuts, legumes, dairy, red meat, and fish. Based on this, the researchers devised a scoring formula that rates a person 1-5 for each of the seven food types, from the lowest quintile of consumption (scored 1) to the highest quintile (scored 5), yielding a total score that can range from 7 to 35. They then divided the PURE participants into quintiles based on their combined intake of all seven food types and found the highest survival rate among people in the quintile with the highest intake across the food groups.
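To illustrate the arithmetic of the score, here is a minimal Python sketch; the function and variable names and the example values are hypothetical and are meant only to show how seven quintile ranks of 1-5 combine into a total of 7-35.

# Illustrative sketch of the PURE Healthy Diet Score arithmetic.
# Each of the seven food groups is rated 1-5 according to the person's
# quintile of consumption, so the total ranges from 7 to 35.
FOOD_GROUPS = ["fruits", "vegetables", "nuts", "legumes", "dairy", "red_meat", "fish"]

def pure_diet_score(quintile_ranks):
    """quintile_ranks maps each food group to its consumption quintile (1-5)."""
    for group in FOOD_GROUPS:
        rank = quintile_ranks[group]
        if not 1 <= rank <= 5:
            raise ValueError(f"{group}: quintile rank must be 1-5, got {rank}")
    return sum(quintile_ranks[group] for group in FOOD_GROUPS)

# Hypothetical examples: the top quintile for every group scores the maximum of 35;
# the bottom quintile for every group scores the minimum of 7.
print(pure_diet_score({g: 5 for g in FOOD_GROUPS}))  # 35
print(pure_diet_score({g: 1 for g in FOOD_GROUPS}))  # 7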

The best-outcome quintile consumed on average about eight servings of fruits and vegetables daily, 2.5 servings of legumes and nuts, three servings of full-fat dairy, 1.5 servings of unprocessed red meat, and 0.3 servings of fish (about two servings of fish weekly). The best-outcome quintile received 54% of calories as carbohydrates, 28% as fat, and 18% as protein. In contrast, the worst-outcome quintile received 69% of calories from carbohydrates, 19% from fat, and 12% from protein.



In a model that adjusted for all measured confounders, people in PURE in the best-diet quintile had a statistically significant 25% reduction in all-cause mortality, compared with people in the quintile with the worst diet.

To validate the formula, the researchers used data collected from three other studies run by their group at McMaster University:

  • The ONTARGET and TRANSCEND studies (N Engl J Med. 2008 Apr 10;358[15]:1547-58), which together included diet and outcomes data for 31,546 patients with vascular disease. After adjustment for measured confounders, people in the quintile with the highest diet score had a statistically significant 24% relative reduction in mortality, compared with the quintile with the worst score.
  • The INTERHEART study (Lancet. 2004 Sep 11;364[9438]:937-52), which had data for 27,098 people and showed that the primary outcome of incident MI was a statistically significant 22% lower after adjustment in the quintile with the best diet score, compared with the quintile with the worst score.
  • The INTERSTROKE study (Lancet. 2016 Aug 20;388[10046]:761-75), with data for 20,834 people, showed that the rate of stroke was a statistically significant 25% lower after adjustment in the quintile with the highest diet score, compared with those with the lowest score.

Dr. Mente had no financial disclosures.


Dr. Mente and his associates have validated the PURE Healthy Diet Score. However, it remains unclear whether the score captures all of the many facets of diet, and it’s also uncertain whether the score is sensitive to changes in diet.

The researchers developed the PURE Healthy Diet Score with data from PURE, a large, international study. Their findings were controversial when first reported in 2017. Controversy arose over at least three of the findings: decreased mortality was linked with increased consumption of saturated fat from dairy and red meat, a result that ran counter to expectations; higher scores did not correlate with a significant effect on cardiovascular disease in the derivation study; and the benefit from fruits and vegetables in the diet plateaued at an intake of about four daily servings.

Another issue with the quintile analysis used to derive the formula is that the spread between the median scores of the bottom, worst-outcome quintile and the top, best-outcome quintile was only 7 points on a scale that ranges from 7 to 35. The small difference in scores between the bottom and top quintiles might limit the discriminatory power of this scoring system.

Eva Prescott, MD, is a cardiologist at Bispebjerg Hospital in Copenhagen. She has been an advisor to AstraZeneca, NovoNordisk, and Sanofi. She made these comments as designated discussant for the report.



REPORTING FROM THE ESC CONGRESS 2018

Vitals

 

Key clinical point: The PURE Healthy Diet Score correlated with survival and cardiovascular events in three new databases.

Major finding: The highest-scoring quintiles had about 25% fewer deaths, MIs, and strokes, compared with the lowest-scoring quintiles.

Study details: The PURE Healthy Diet Score underwent validation using three independent data sets with a total of 79,478 people.

Disclosures: Dr. Mente had no financial disclosures.


Optimizing use of TKIs in chronic leukemia


 

DUBROVNIK, CROATIA – Long-term efficacy and toxicity should inform decisions about tyrosine kinase inhibitors (TKIs) in chronic myeloid leukemia (CML), according to one expert.


Studies have indicated that long-term survival rates are similar whether CML patients receive frontline treatment with imatinib or second-generation TKIs. But the newer TKIs pose a higher risk of uncommon toxicities, Hagop M. Kantarjian, MD, said during the keynote presentation at Leukemia and Lymphoma, a meeting jointly sponsored by the University of Texas MD Anderson Cancer Center and the School of Medicine at the University of Zagreb, Croatia.

Dr. Kantarjian, a professor at MD Anderson Cancer Center in Houston, said most CML patients should receive daily treatment with TKIs – even if they are in complete cytogenetic response or 100% Philadelphia chromosome positive – because they will live longer.

Frontline treatment options for CML that are approved by the Food and Drug Administration include imatinib, dasatinib, nilotinib, and bosutinib.

Dr. Kantarjian noted that dasatinib and nilotinib bested imatinib in early analyses from clinical trials, but all three TKIs produced similar rates of overall survival (OS) and progression-free survival (PFS) at extended follow-up.

Dasatinib and imatinib produced similar rates of 5-year OS and PFS in the DASISION trial (J Clin Oncol. 2016 Jul 10;34[20]:2333-40).

In ENESTnd, 5-year OS and PFS rates were similar with nilotinib and imatinib (Leukemia. 2016 May;30[5]:1044-54).

However, the higher incidence of uncommon toxicities with the newer TKIs must be taken into account, Dr. Kantarjian said.
 

Choosing a TKI

Dr. Kantarjian recommends frontline imatinib for older patients (aged 65-70) and those who are low risk based on their Sokal score.

Second-generation TKIs should be given up front to patients who are at higher risk by Sokal and for “very young patients in whom early treatment discontinuation is important,” he said.

“In accelerated or blast phase, I always use the second-generation TKIs,” he said. “If there’s no binding mutation, I prefer dasatinib. I think it’s the most potent of them. If there are toxicities with dasatinib, bosutinib is equivalent in efficacy, so they are interchangeable.”

A TKI should not be discarded unless there is loss of complete cytogenetic response – not major molecular response – at the maximum tolerated adjusted dose that does not cause grade 3-4 toxicities or chronic grade 2 toxicities, Dr. Kantarjian added.

“We have to remember that we can go down on the dosages of, for example, imatinib, down to 200 mg a day, dasatinib as low as 20 mg a day, nilotinib as low as 150 mg twice a day or even 200 mg daily, and bosutinib down to 200 mg daily,” he said. “So if we have a patient who’s responding with side effects, we should not abandon the particular TKI, we should try to manipulate the dose schedule if they are having a good response.”

Dr. Kantarjian noted that pleural effusion is a toxicity of particular concern with dasatinib, but lowering the dose to 50 mg daily results in similar efficacy and significantly less toxicity than 100 mg daily. For patients over the age of 70, a 20-mg dose can be used.

Vaso-occlusive and vasospastic reactions are increasingly observed in patients treated with nilotinib. For that reason, Dr. Kantarjian said he prefers to forgo up-front nilotinib, particularly in patients who have cardiovascular or neurotoxic problems.

“The incidence of vaso-occlusive and vasospastic reactions is now close to 10%-15% at about 10 years with nilotinib,” Dr. Kantarjian said. “So it is not a trivial toxicity.”

For patients with vaso-occlusive/vasospastic reactions, “bosutinib is probably the safest drug,” Dr. Kantarjian said.

For second- or third-line therapy, patients can receive ponatinib or a second-generation TKI (dasatinib, nilotinib, or bosutinib), as well as omacetaxine or allogeneic stem cell transplant.

“If you disregard toxicities, I think ponatinib is the most powerful TKI, and I think that’s because we are using it at a higher dose that produces so many toxicities,” Dr. Kantarjian said.

Ponatinib is not used up front because of these toxicities, particularly pancreatitis, skin rashes, vaso-occlusive disorders, and hypertension, he added.

Dr. Kantarjian suggests giving ponatinib at 30 mg daily in patients with T315I mutation and those without guiding mutations who are resistant to second-generation TKIs.
 

 

 

Discontinuing a TKI

Dr. Kantarjian said patients can discontinue TKI therapy if they:

  • Are low- or intermediate-risk by Sokal.
  • Have quantifiable BCR-ABL transcripts.
  • Are in chronic phase.
  • Achieved an optimal response to their first TKI.
  • Have been on TKI therapy for more than 8 years.
  • Achieved a complete molecular response.
  • Have had a molecular response for more than 2-3 years.
  • Are available for monitoring every other month for the first 2 years.

Dr. Kantarjian did not report any conflicts of interest at the meeting. However, he has previously reported relationships with Novartis, Bristol-Myers Squibb, Pfizer, and Ariad Pharmaceuticals.

The Leukemia and Lymphoma meeting is organized by Jonathan Wood & Association, which is owned by the parent company of this news organization.



REPORTING FROM LEUKEMIA AND LYMPHOMA 2018


Real-world data, machine learning, and the reemergence of humanism


As we relentlessly enter information into our EHRs, we typically perceive that we are just recording information about our patients to provide continuity of care and have an accurate representation of what was done. While that is true, the information we record is now increasingly being examined for many additional purposes. A whole new area of study has emerged over the last few years known as “real-world data,” and innovators are beginning to explore how machine learning (currently employed in other areas by such companies as Amazon and Google) may be used to improve the care of patients. The information we are putting into our EHRs is being translated into discrete data and is then combined with data from labs, pharmacies, and claims databases to examine how medications actually work when used in the wide and wild world of practice.

Dr. Chris Notte and Dr. Neil Skolnik

Let’s first talk about why real-world data are important. Traditionally, the evidence we rely upon in medicine has come from randomized trials, which give us an unbiased assessment of the safety and efficacy of the medications we use. The Achilles’ heel of randomized trials is that, by their nature, they enroll a carefully defined group of patients – with specific inclusion and exclusion criteria – who may not be like the patients in our practices. Randomized trials are also conducted at sites that are different from most of our offices. The clinics where randomized trials are conducted have dedicated personnel to follow up with patients, make sure patients take their medications, and ensure that patients remember their follow-up visits. What this means is that the results of those studies might not reflect the results seen in the real world.

A nice example of this was reported recently in the area of diabetes management. Randomized trials have shown that the glucagonlike peptide–1 (GLP-1) receptor agonist class of medications lowers hemoglobin A1c about twice as much as the dipeptidyl peptidase–4 (DPP-4) inhibitor class, but that difference in efficacy is not seen in practice. In real-world studies, the two classes have about the same glucose-lowering efficacy. Why might that be? It may be that compliance with GLP-1 receptor agonists is lower than with DPP-4 inhibitors because of side effects of nausea and GI intolerance. When patients miss more doses of their GLP-1 receptor agonist, they do not achieve the HbA1c lowering seen in trials, in which compliance is far better.1

This exploration of real-world outcomes is just a first step in using the information documented in our charts. The exciting next step will be machine learning, also called deep learning.2 In this process, computers look at an enormous number of data points and find relationships that would otherwise not be detected. Imagine a supercomputer analyzing every blood pressure after any medication is changed across thousands, or even millions, of patients, and linking the outcome of that medication choice with the next blood pressure.3 Then imagine the computer meshing millions of data points that include all patients’ weights, ages, sexes, family histories of cardiovascular disease, renal function, etc. and matching those parameters with the specific medication and follow-up blood pressures. While much has been discussed about using genetics to advance personalized medicine, one can imagine these machine-based algorithms discovering connections about which medications work best for individuals with specific characteristics – without the need for additional testing. When the final loop of this cascade is connected, the computer could present recommendations to the clinician about which medication is optimal for the patient and then refine these recommendations, based on outcomes, to optimize safety and efficacy.
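To make the idea concrete, the following minimal sketch shows the kind of model such a system might fit; the data file, column names, and model choice are hypothetical and are meant only to illustrate learning the relationship between patient characteristics, the medication chosen, and the follow-up blood pressure.

# Illustrative sketch only: predict follow-up systolic blood pressure from
# hypothetical patient characteristics and the antihypertensive chosen.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("bp_visits.csv")  # hypothetical extract of EHR, pharmacy, and claims data

features = pd.get_dummies(
    df[["age", "weight", "sex", "baseline_sbp", "egfr", "medication"]],
    columns=["sex", "medication"],
)
target = df["followup_sbp"]

X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# In principle, one could then compare predicted follow-up blood pressures across
# candidate medications for a new patient and surface the most promising choice.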

Some have argued that there is no way a computer will be able to perform as well as an experienced clinician who uses a combination of data and intuition to choose the best medication for his or her patient. This argument is similar to the controversy over autonomous driving cars. Many have asked how you can be assured that the cars will never have an accident. That is, of course, the wrong question. The correct question, as articulated very nicely by one of the innovators in that field, George Hotz, is how we can make a car that is safer than the way cars are currently being driven (which means fewer deaths than the 15,000 that occur annually with humans behind the wheel).4

Our current method of providing care often leaves patients without appropriate guideline-recommended medications, and many don’t reach their HbA1c, blood pressure, cholesterol, and asthma-control goals. The era of machine learning with machine-generated algorithms may be much closer than we think, which will allow us to spend more time talking with patients, educating them about their disease, and supporting them in their efforts to remain healthy – an attractive future for both us and our patients.
 

 

 

References

1. Carls GS et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017 Nov;40(11):1469-78.

2. Naylor CD. On the prospects for a (deep) learning health care system. JAMA. 2018 Sep 18;320(11):1099-100.

3. Wang YR et al. Outpatient hypertension treatment, treatment intensification, and control in Western Europe and the United States. Arch Intern Med. 2007 Jan 22;167(2):141-7.

4. Super Hacker George Hotz: “I can make your car drive itself for under $1,000.”


Diagnosis is an ongoing concern in endometriosis


 

Endometriosis can mean living with symptoms, multiple surgeries, and infertility for the estimated 10% of U.S. women who experience it, according to a new survey by Health Union, a family of online health communities.

Advances in support and understanding have been made through research and dissemination of information via the Internet, but complete control of endometriosis remains elusive, as only 13% of the 1,239 women surveyed from June 13 to July 14, 2018, said that their condition was under control with their current treatment plan.

Before control, of course, comes diagnosis, and the average gap between onset of symptoms and diagnosis was 8.6 years. Such a gap “can lead to delayed treatment and a potentially negative impact on quality of life,” Health Union said in a written statement. Those years of delays often involved visits to multiple physicians: 44% of respondents saw 3-5 physicians before receiving a diagnosis and 11% had to see 10 or more physicians.

“When comparing differences between symptom onset-to-diagnosis groups, there are some significant findings that suggest a fair amount of progress has been made, for the better,” Health Union said, noting that women who received a diagnosis in less than 5 years “were significantly less likely to think their symptoms were related to their menstrual cycles than those with a longer symptoms-to-diagnosis gap.” Respondents who had a gap of less than 2 years “were more likely to seek medical care as soon as possible” and to have used hormone therapies than those with longer gaps, the group said.

The most common diagnostic tests were laparoscopy, reported by 85% of respondents, and pelvic/transvaginal ultrasound, reported by 46%. Of the women who did not have a laparoscopy, 43% were undergoing a surgical procedure for another condition when their endometriosis was discovered. Laparoscopy also was by far the most common surgery to treat endometriosis, with a 79% prevalence among respondents, compared with 16% for laparotomy and 12% for oophorectomy, Health Union reported in Endometriosis in America 2018.

Common nonsurgical tactics to improve symptoms included increased water intake (79%), use of a heating pad (75%), and increased fresh fruit (64%) or green vegetables (62%) in the diet. Three-quarters of respondents also tried alternative and complementary therapies such as vitamins, exercise, and acupuncture, the report showed.

“Living with endometriosis is much easier now than it was not even a decade ago, as the Internet and social media have definitely increased knowledge about the disease,” said Endometriosis.net (one of the Health Union online communities) patient advocate Laura Kiesel. “When I first suspected I had the disease, in the mid-90s, hardly anyone had heard about it, and those aware of it didn’t think it was very serious. All these years later, I get a lot more sympathy and support – both online and in person – and people understand how serious, painful, and life altering it could be.”


Moderate hypofractionation preferred in new guideline for localized PC

Article Type
Changed
Fri, 01/04/2019 - 14:25

 

Moderate hypofractionation is preferred over conventional fractionation in the treatment of patients with localized prostate cancer who are candidates for external beam radiotherapy (EBRT), according to a new clinical practice guideline.

A meta-analysis of randomized clinical trials showed that moderate hypofractionation delivered the same efficacy as conventional fractionation, with a mild increase in gastrointestinal toxicity, reported lead author Scott C. Morgan, MD, of OSF Medical Group in Bloomington, Illinois, and his colleagues. In the authors' view, this drawback is outweighed by distinct advantages in resource utilization and patient convenience, making moderate hypofractionation the preferred choice.

For many types of cancer, a shift toward fewer fractions of higher radiation is ongoing, driven largely by technological advances in radiation planning and delivery.

“Technical advances have permitted more precise and conformal delivery of escalated doses of radiation to the prostate, thereby improving the therapeutic ratio,” the authors wrote in the Journal of Clinical Oncology.

Fractionation is typically limited by the radiation sensitivity of adjacent tissue, but prostate tumors are more sensitive to larger doses per fraction than the rectum is, allowing higher doses per fraction without damaging healthy tissue. Conventional fractionation delivers 180-200 cGy per fraction, whereas moderate hypofractionation delivers 240-340 cGy per fraction. Ultrahypofractionation is defined as doses of 500 cGy or more per fraction (the upper limit of the linear-quadratic model of cell survival).
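As a purely illustrative sketch of how those per-fraction dose bands partition regimens, the short classifier below uses only the thresholds quoted above; the function name, the unit handling, and the treatment of doses that fall outside the quoted bands (for example, the 340-500 cGy gap, which the summary above does not label) are assumptions of this example, not part of the guideline.

# Illustrative only: dose bands are the per-fraction definitions quoted above.
def fractionation_category(dose_cgy_per_fraction: float) -> str:
    if 180 <= dose_cgy_per_fraction <= 200:
        return "conventional fractionation"
    if 240 <= dose_cgy_per_fraction <= 340:
        return "moderate hypofractionation"
    if dose_cgy_per_fraction >= 500:
        return "ultrahypofractionation"
    return "not covered by the bands quoted above"

if __name__ == "__main__":
    for dose in (200, 300, 725):
        print(dose, "cGy ->", fractionation_category(dose))

Run as written, this labels 200 cGy as conventional fractionation, 300 cGy as moderate hypofractionation, and 725 cGy as ultrahypofractionation.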

The present guideline was developed through a 2-year collaborative effort among the American Society for Radiation Oncology, the American Society of Clinical Oncology, and the American Urological Association. Task force members included urologic surgeons and oncologists, medical physicists, and radiation oncologists from academic and nonacademic settings; a patient representative and a radiation oncology resident also were involved. After completing a systematic literature review, the team developed recommendations of varying strength and described the quality of the supporting evidence and the level of consensus for each.

Of note, the guideline calls for moderate hypofractionation in patients with localized prostate cancer regardless of urinary function, anatomy, comorbidity, or age, and with or without radiation to the seminal vesicles. Along with this recommendation, clinicians should discuss with patients the small increased risk of acute gastrointestinal toxicity compared with conventional fractionation, as well as the limited follow-up time in most relevant clinical trials (often less than 5 years).

The guideline conveyed more skepticism regarding ultrahypofractionation because of a lack of supporting evidence and comparative trials. As such, the authors conditionally recommended ultrahypofractionation for low-risk and intermediate-risk patients, the latter of whom should be encouraged to enter clinical trials.

“The conditional recommendations regarding ultrahypofractionation underscore the importance of shared decision making between clinicians and patients in this setting,” the authors wrote. “The decision to use ultrahypofractionated EBRT at this time should follow a detailed discussion of the existing uncertainties in the risk-benefit balance associated with this treatment approach and should be informed at all stages by the patient’s values and preferences.”

The authors reported financial affiliations with Amgen, GlaxoSmithKline, Bristol-Myers Squibb, and others.

SOURCE: Morgan et al. J Clin Oncol. 2018 Oct 11. doi: 10.1200/JCO.18.01097.

Vitals

 

Key clinical point: Moderate hypofractionation is preferred over conventional fractionation in treatment of patients with localized prostate cancer who are candidates for external beam radiotherapy (EBRT).

Major finding: The guideline panel reached a 94% consensus for the recommendation of moderate hypofractionation over conventional fractionation regardless of urinary function, anatomy, comorbidity, or age.

Study details: An evidence-based clinical practice guideline developed by the American Society for Radiation Oncology (ASTRO), the American Society of Clinical Oncology (ASCO), and the American Urological Association (AUA).

Disclosures: The authors reported financial affiliations with Amgen, GlaxoSmithKline, Bristol-Myers Squibb, and others.

Source: Morgan et al. J Clin Oncol. 2018 Oct 11. doi: 10.1200/JCO.18.01097.


Dr. Bawa-Garba and trainee liability

Article Type
Changed
Thu, 03/28/2019 - 09:13

 

Question: Which of the following regarding medical trainee liability is best?

A. Trainees are commonly named as codefendants with their attending physician in a medical malpractice lawsuit.

B. “From a culture of blame to a culture of safety” is a rallying cry against poor work conditions.

C. House officers are always judged by a lower standard, because they are not fully qualified.

D. A, B, and C are correct.

E. A and C are correct.

Answer: A. A recent case of trainee liability in the United Kingdom resulted in criminal prosecution followed by the trainee being struck off the medical register.1 Dr. Hadiza Bawa-Garba, a pediatric trainee in the U.K. National Health Service, was prosecuted in a court of law and found guilty of manslaughter by gross negligence for the septic death of a 6-year-old boy with Down syndrome. The General Medical Council (GMC), the U.K. medical regulatory agency, voted to take away her license. The decision aroused the ire of physicians worldwide, who noted the poor supervision and undue pressures she was under.

In August 2018, the U.K. Court of Appeal noted that the general clinical competency of Dr. Bawa-Garba was never at issue, and that “the risk of her clinical practice suddenly and without explanation falling below the standards expected on any given day is no higher than for any other reasonably competent doctor.” It reversed the expulsion order and reinstated the 1-year suspension recommended by the Medical Practitioners Tribunal.

Even as the GMC accepted this appellate decision and had convened a commission to look into criminal negligence, it nonetheless received heavy criticism for having overreacted – and for its failure to speak out more forcefully to support those practicing under oppressive conditions.

For example, the Doctors’ Association UK said the GMC had shown it could not be trusted to be objective and nonpunitive. The case, it noted, had “united the medical profession in fear and outrage,” whereby “a pediatrician in training ... a highly regarded doctor, with a previously unblemished record, [was] convicted of [the criminal offence of] gross negligence manslaughter for judgments made whilst doing the jobs of several doctors at once, covering six wards across four floors, responding to numerous pediatric emergencies, without a functioning IT system, and in the absence of a consultant [senior physician], all when just returning from 14 months of maternity leave.”

The Royal College of Pediatrics and Child Health said it had “previously flagged the importance of fostering a culture of supporting doctors to learn from their mistakes, rather than one which seeks to blame.” And the British Medical Association said, “lessons must be learned from this case, which raises wider issues about the multiple factors that affect patient safety in an NHS under extreme pressure, rather than narrowly focusing only on individuals.”2

The fiasco surrounding the Dr. Bawa-Garba case will hopefully result in action similar to that following the seminal report that medical errors account for nearly 100,000 annual hospital deaths in the United States. That study was not restricted to house staff mistakes, but involved multiple hospitals and hospital staff members. It spawned a nationwide reappraisal of how to approach medical errors, and it spurred the Institute of Medicine to recommend that the profession shift “from a culture of blame to a culture of safety.”3

Criminal prosecution in the United States is decidedly rare in death or injury occurring during the course of patient care – for either trainees or attending physicians. A malpractice lawsuit would have been a far more likely outcome had the Dr. Bawa-Garba case taken place in the United States.

Lawsuits against U.S. house staff are not rare, and resident physicians are regularly joined as codefendants with their supervisors, who may be medical school faculty or community practitioners admitting to “team care.” Regulatory actions are typically directed against fully licensed physicians, rather than the trainees. Instead, the director of the training program itself would take corrective action against an errant resident, if warranted, which can range from a warning to outright dismissal from the program.

How is negligence law applied to a trainee? Should it demand the same standard of care as it would a fully qualified attending physician?4 Surprisingly, the courts are split on this question. Some have favored using a dual standard of conduct, with trainees being held to a lower standard.

This was articulated in Rush v. Akron General Hospital, which involved a patient who had fallen through a glass door. The patient suffered several lacerations to his shoulder, which the intern treated. However, when two remaining pieces of glass were later discovered in the area of injury, the patient sued the intern for negligence.

The court dismissed the claim, finding that the intern had practiced with the skill and care of his peers of similar training. “It would be unreasonable to exact from an intern, doing emergency work in a hospital, that high degree of skill which is impliedly possessed by a physician and surgeon in the general practice of his profession, with an extensive and constant practice in hospitals and the community,” the court noted.5

However, not all courts have embraced this dual standard of review. The New Jersey Superior Court held that licensed residents should be judged by a standard applicable to a general practitioner, because any reduction in the standard of care would set a problematic precedent.6 In that case, the residents allegedly failed to reinsert a nasogastric tube, which caused the patient to aspirate.

And in Pratt v. Stein, a second-year resident was judged by an even higher standard – that of a specialist – after he had allegedly administered a toxic dose of neomycin to a postoperative patient, which resulted in deafness. Although the lower court had ruled that the resident should be held to the standard of an ordinary physician, the Pennsylvania appellate court disagreed, reasoning that “a resident should be held to the standard of a specialist when the resident is acting within his field of specialty. In our estimation, this is a sound conclusion. A resident is already a physician who has chosen to specialize, and thus possesses a higher degree of knowledge and skill in the chosen specialty than does the nonspecialist.”7

However, a subsequent decision from the same jurisdiction suggests a retreat from this unrealistic standard.

An orthopedic resident allegedly applied a cast with insufficient padding to the broken wrist of a patient. The plaintiff claimed this led to soft-tissue infection with Staphylococcus aureus, with complicating septicemia, staphylococcal endocarditis, and eventual death.

Dr. S.Y. Tan

The court held that the resident’s standard of care should be “higher than that for general practitioners but less than that for fully trained orthopedic specialists. ... To require a resident to meet the same standard of care as fully trained specialists would be unrealistic. A resident may have had only days or weeks of training in the specialized residency program; a specialist, on the other hand, will have completed the residency program and may also have had years of experience in the specialized field. If we were to require the resident to exercise the same degree of skill and training as the specialist, we would, in effect, be requiring the resident to do the impossible.”8
 

 

 

Dr. Tan is emeritus professor of medicine and former adjunct professor of law at the University of Hawaii, Honolulu. This article is meant to be educational and does not constitute medical, ethical, or legal advice. For additional information, readers may contact the author at [email protected].

References

1. Saurabh Jha, “To Err Is Homicide in Britain: The Case of Hadiza Bawa-Garba.” The Health Care Blog, Jan. 30, 2018.

2. “‘Lessons Must Be Learned’: UK Societies on Bawa-Garba Ruling.” Medscape, Aug. 14, 2018.

3. “To Err is Human: Building a Safer Health System.” Institute of Medicine, National Academies Press, Washington D.C., 1999.

4. JAMA. 2004 Sep 1;292(9):1051-6.

5. Rush v. Akron General Hospital, 171 N.E.2d 378 (Ohio Ct. App. 1987).

6. Clark v. University Hospital, 914 A.2d 838 (N.J. Super. 2006).

7. Pratt v. Stein, 444 A.2d 674 (Pa. Super. 1980).

8. Jistarri v. Nappi, 549 A.2d 210 (Pa. Super. 1988).


Have apheresis units, will travel

Article Type
Changed
Fri, 01/04/2019 - 10:36

 

BOSTON – If donors can’t get to the apheresis center, bring the apheresis center to the donors.

Neil Osterweil/ MDedge News
David Anthony and Amber Lazareff

Responding to the request of a patient with cancer, David Anthony, Amber Lazareff, RN, and their colleagues at the University of California at Los Angeles Blood and Platelet Center explored adding mobile apheresis units to their existing community blood drives. They found that, with careful planning and coordination, they could augment their supply of vital blood products and introduce potential new donors to the idea of apheresis donations at the hospital.

“There was a needs drive for an oncology patient at UCLA. She wanted to bring in donors and had her whole community behind her, and we thought well, she’s an oncology patient and she uses platelets, and we had talked about doing platelets out in the field rather than just at fixed sites, and we thought that this would be a good chance to try it,” Mr. Anthony said in an interview at AABB 2018, the annual meeting of the group formerly known as the American Association of Blood Banks.

Until the mobile unit was established, apheresis platelet collections for the hospital-based donor center were limited to two fixed collection sites, with mobile units used only for collection of whole blood.

To see whether concurrent whole blood and platelet community drives were practical, the center’s blood donor field recruiter requested that a community drive be scheduled in a region of the county where potential donors had expressed strong interest in apheresis platelet donations.

Operations staff visited the site to assess its suitability, including appropriate space for donor registration and history taking, separate areas for whole blood and apheresis donations, and a donor recovery area. The assessment included ensuring that there were suitable electrical outlets, space, and support for apheresis machines.

“Over about 2 weeks we discussed with our medical directors, [infusion technicians], and our mobile people what we would need to do it. The recruiter out in the field was able to go to a high school drive out in that area, recruit donors, and get [platelet] precounts from them so that we could find out who was a good candidate,” Mr. Anthony said.

Once they had platelet counts from potential apheresis donors, 10 donors were prescreened based on their eligibility to donate multiple products, history of donations and red blood cell loss, and, for women who had previously had more than one pregnancy, favorable HLA test results.

Four of the prescreened donors were scheduled to donate platelets, and the time slot also included two backup donors, one of whom ultimately donated platelets. Of the four apheresis donors, three were first-time platelet donors.

The first drive collected seven platelet products, including three double products and one single product.

The donated products resulted in about a $3,000 cost savings by obviating the need for purchasing products from an outside supplier, and bolstered the blood bank’s inventory on a normally low collection day, the authors reported.

“We’ve had two more apheresis drives since then, and we’ll have another one in 3 weeks,” Mr. Anthony said.

He acknowledged that it is more challenging to recruit, educate, and ideally retain donors in the field than in the brick-and-mortar hospital setting.

“We have to make sure that they’re going to show up if we’re going to make the effort to take a machine out there, whereas at our centers, we have regular donors who come in every 2 weeks, it’s easy for them to make an appointment, and they know where we are,” he said.

The center plans to continue concurrent monthly whole blood and platelet collection drives, he added.

The pilot program was internally funded. The authors reported having no relevant conflicts of interest.

SOURCE: Anthony D et al. AABB 2018, Poster BBC 135.

Vitals

 

Key clinical point: Apheresis platelet collection in the field is practical with proper planning and support.

Major finding: Field-based collection of platelet products saved costs and augmented the hospital’s supply on a normally low collection day.

Study details: Pilot program testing apheresis platelet donations during community blood drives.

Disclosures: The pilot program was internally funded. The authors reported having no relevant conflicts of interest.

Source: Anthony D et al. AABB 2018, Poster BBC 135.


Does the preterm birth racial disparity persist among black and white IVF users?

Article Type
Changed
Tue, 10/16/2018 - 09:17

Investigators from the National Institutes of Health and Shady Grove Fertility found that, among women having a singleton live birth resulting from in vitro fertilization (IVF), black women are at higher risk for lower gestational age and preterm delivery than white women.1 The study results were presented at the American Society for Reproductive Medicine (ASRM) 2018 annual meeting (October 6 to 10, Denver, Colorado).

Kate Devine, MD, coinvestigator of the retrospective cohort study, said in an interview with OBG Management that “It’s been well documented that African Americans have a higher preterm birth rate in the United States compared to Caucasians and the overall population. While the exact mechanism of preterm birth is unknown and likely varied, and while the mechanism for the preterm birth rate being higher in African Americans is not well understood, it has been hypothesized that socioeconomic factors are responsible at least in part.”2 She added that the investigators used a population of women receiving IVF for the study because “access to reproductive care and IVF is in some way a leveling factor in terms of socioeconomics.”

Details of the study. The investigators reviewed all singleton IVF pregnancies ending in live birth among women self-identifying as white, black, Asian, or Hispanic from 2004 to 2016 at a private IVF practice (N=10,371). The primary outcome was gestational age at birth, calculated as the number of days from oocyte retrieval to birth, plus 14.
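For readers who want the arithmetic spelled out, the following is a minimal sketch, not code from the study, of the gestational-age definition just described; the dates, function name, and 14-day rationale noted in the comment are illustrative assumptions.

```python
from datetime import date

def gestational_age_at_birth(retrieval_date: date, birth_date: date) -> int:
    """Gestational age in days: days from oocyte retrieval to birth, plus 14.

    The 14-day offset reflects the usual convention of dating gestation from
    the last menstrual period, roughly two weeks before ovulation/retrieval.
    """
    return (birth_date - retrieval_date).days + 14

# Illustrative example with hypothetical dates
ga_days = gestational_age_at_birth(date(2016, 1, 15), date(2016, 10, 1))
weeks, days = divmod(ga_days, 7)
print(f"{ga_days} days = {weeks} weeks, {days} days")  # 274 days = 39 weeks, 1 day
```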

Births among black women occurred more than 6 days earlier than births among white women. The researchers noted that some of the shorter gestations among the black women could be explained by the higher average body mass index of the group (P<.0001). Dr. Devine explained that another contributing factor was the higher incidence of fibroid uterus among the black women (P<.0001). But after adjusting for these and other demographic variables, the black women still delivered 5.5 days earlier than the white women, and they were more than 3 times as likely to have either very preterm or extremely preterm deliveries (TABLE).1



Research implications. Dr. Devine said that black pregnant patients “perhaps should be monitored more closely” for signs or symptoms suggestive of preterm labor, and she would like to see more research into the mechanisms driving the higher rates of preterm birth among black women. She added that research into how fibroids affect obstetric outcomes is also important.

Share your thoughts! Send your Letters to the Editor to [email protected]. Please include your name and the city and state in which you practice.

References
  1. Bishop LA, Devine K, Sasson I, et al. Lower gestational age and increased risk of preterm birth associated with singleton live birth resulting from in vitro fertilization (IVF) among African American versus comparable Caucasian women. Fertil Steril. 2018;110(45 suppl):e7.

Aspirin cuts risk of ovarian and liver cancer

Has aspirin for cancer chemoprevention “arrived”?
Article Type
Changed
Wed, 05/26/2021 - 13:48

Regular long-term aspirin use may lower the risk of hepatocellular carcinoma (HCC) and ovarian cancer, adding to the growing evidence that aspirin may play a role as a chemopreventive agent, according to two new studies published in JAMA Oncology.

In the first study, led by Tracey G. Simon, MD, of Massachusetts General Hospital, Boston, the authors evaluated the associations between aspirin dose and duration of use and the risk of developing HCC. They conducted a population-based study, with a pooled analysis of two large prospective U.S. cohort studies: the Nurses’ Health Study and the Health Professionals Follow-up Study. The cohort included a total of 133,371 health care professionals who reported long-term data on aspirin use, frequency, dosage, and duration of use.

For the 87,507 female participants, reporting began in 1980, and for the 45,864 men, reporting began in 1986. At the midpoint of follow-up (1996), the mean age was 62 years for women and 64 years for men. Compared with nonusers, those who used aspirin regularly tended to be older, to be former smokers, and to use statins and multivitamins regularly. During the follow-up period of more than 26 years, there were 108 incident cases of HCC (65 women, 43 men; 47 with noncirrhotic HCC).

The investigators found that regular aspirin use was associated with a significantly lower HCC risk versus nonregular use (multivariable hazard ratio, 0.51; 95% confidence interval, 0.34-0.77), and estimates were similar for both sexes. Adjustment for regular NSAID use (for example, at least two tablets per week) did not change the results, which were also similar after further adjustment for coffee consumption and adherence to a healthy diet. The benefit also appeared to be dose related: compared with nonuse, the multivariable-adjusted HR for HCC was 0.87 (95% CI, 0.51-1.48) for up to 1.5 tablets of standard-dose aspirin per week and 0.51 (95% CI, 0.30-0.86) for 1.5-5 tablets per week. The greatest benefit was seen with at least five tablets per week (HR, 0.49; 95% CI, 0.28-0.96; P = .006).

“Our findings add to the growing literature suggesting that the chemopreventive effects of aspirin may extend beyond colorectal cancer,” they wrote.

In the second study, Mollie E. Barnard, ScD, of the Harvard School of Public Health, Boston, and her colleagues looked at whether regular aspirin or NSAID use, as well as the patterns of use, were associated with a lower risk of ovarian cancer.

The data were obtained from 93,664 women in the Nurses’ Health Study (NHS), who were followed up from 1980 to 2014, and 111,834 women in the Nurses’ Health Study II (NHSII), who were followed up from 1989 to 2015. For each type of agent, including aspirin, low-dose aspirin, nonaspirin NSAIDs, and acetaminophen, they evaluated the timing, duration, frequency, and number of tablets used. The mean age at baseline was 45.9 years in the NHS and 34.2 years in the NHSII.

There were 1,054 incident cases of epithelial ovarian cancer identified during the study period. The authors did not detect any significant associations between aspirin and ovarian cancer risk when current users and nonusers were compared, regardless of dose (HR, 0.99; 95% CI, 0.83-1.19). But when low-dose (less than or equal to 100 mg) and standard-dose (325 mg) aspirin were analyzed separately, an inverse association for low-dose aspirin (HR, 0.77; 95% CI, 0.61-0.96) was observed. However, there was no association for standard-dose aspirin (HR, 1.17; 95% CI, 0.92-1.49).
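As a point of arithmetic rather than a finding from either paper, the hazard ratios reported above (0.77 for low-dose aspirin and ovarian cancer; 0.51 for regular aspirin use and HCC) translate into approximate percentage risk reductions of (1 − HR) × 100, assuming the hazard ratio is read as an approximate relative risk; the short sketch below is illustrative only.

```python
def percent_risk_reduction(hazard_ratio: float) -> float:
    """Relative risk reduction implied by a hazard ratio, in percent.

    Treats the hazard ratio as an approximate relative risk, a simplification,
    which is how headline figures such as 23% and 49% are usually derived.
    """
    return (1.0 - hazard_ratio) * 100.0

# HR 0.77: low-dose aspirin and ovarian cancer; HR 0.51: regular aspirin and HCC
for hr in (0.77, 0.51):
    print(f"HR {hr:.2f} -> about {percent_risk_reduction(hr):.0f}% lower risk")
```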

In contrast, use of nonaspirin NSAIDs was positively associated with a higher risk of ovarian cancer when compared with nonuse (HR, 1.19; 95% CI, 1.00-1.41), and there were significant positive trends for duration of use (P = .02) and cumulative average tablets per week (P = .03). No clear associations were identified for acetaminophen use.

“Our results also suggest an increased risk of ovarian cancer among long-term, high-quantity users of nonaspirin analgesics, although this finding may reflect unmeasured confounding,” wrote Dr. Barnard and her coauthors. “Further exploration is warranted to evaluate the mechanisms by which heavy use of aspirin, nonaspirin NSAIDs, and acetaminophen may contribute to the development of ovarian cancer and to replicate our findings.”

The ovarian cancer study was supported by awards from the National Institutes of Health. Dr. Barnard was supported by awards from the National Cancer Institute, and her coauthors had no disclosures to report. The HCC study was funded by an infrastructure grant from the Nurses’ Health Study, an infrastructure grant from the Health Professionals Follow-up Study, and NIH grants to several of the authors. Dr. Chan has previously served as a consultant for Bayer on work unrelated to this article. No other disclosures were reported.

SOURCES: Barnard ME et al. JAMA Oncol. 2018 Oct 4. doi: 10.1001/jamaoncol.2018.4149; Simon TG et al. JAMA Oncol. 2018 Oct 4. doi: 10.1001/jamaoncol.2018.4154.


In an accompanying editorial published in JAMA Oncology, Victoria L. Seewaldt, MD, of the City of Hope Comprehensive Cancer Center in Duarte, Calif., asked if we “have arrived,” as these two studies are a critical step in realizing the potential of aspirin for cancer chemoprevention beyond colorectal cancer.

Aspirin use is very common in the United States, with almost half of adults aged between 45 and 75 years taking it regularly. Many regular users also believe that aspirin has potential to protect against cancer, and in a 2015 study – which was conducted prior to any formal cancer prevention guidelines – 18% of those taking aspirin on a regular basis reported doing so to prevent cancer.

Based on the strength of the association between aspirin use and colorectal cancer risk reduction, the U.S. Preventive Services Task Force recommended in 2015 that, among individuals aged between 50 and 69 years who have specific cardiovascular risk profiles, colorectal cancer prevention be included as part of the rationale for regular aspirin prophylaxis, Dr. Seewaldt noted. Aspirin became the first drug to be included in USPSTF recommendations for cancer chemoprevention in a “population not characterized as having a high risk of developing cancer.”

But it now appears aspirin may be able to go beyond colorectal cancer for chemoprevention. Ovarian cancer and hepatocellular carcinoma are in need of new prevention strategies, and these findings provide important information that can help guide chemoprevention with aspirin.

These two studies “have the power to start to change clinical practice,” Dr. Seewaldt wrote, but more research is needed to better understand the underlying mechanism behind the appropriate dose and duration of use. Importantly, the authors of both studies cautioned that the potential benefits of aspirin must be weighed against the risk of bleeding, which is particularly important in patients with chronic liver disease.

“To reach the full promise of aspirin’s ability to prevent cancer, there needs to be better understanding of dose, duration, and mechanism,” she emphasized.

Dr. Seewaldt reported receiving grants from the National Institutes of Health/National Cancer Institute and is supported by the Prevent Cancer Foundation.


Article Source

FROM JAMA ONCOLOGY

Vitals

Key clinical point: Regular aspirin use was associated with a decreased risk of ovarian cancer and hepatocellular carcinoma.

Major finding: Low-dose aspirin was associated with a 23% lower risk of ovarian cancer, and regular aspirin use with a 49% lower risk of hepatocellular carcinoma.

Study details: The hepatocellular carcinoma study was a population-based analysis of two nationwide, prospective cohorts comprising 87,507 women and 45,864 men; the ovarian cancer study used data from two prospective cohorts, with 93,664 women in one and 111,834 in the other.

Disclosures: The ovarian cancer study was supported by awards from the National Institutes of Health. Dr. Barnard was supported by awards from the National Cancer Institute, and her coauthors had no disclosures to report. The hepatocellular carcinoma study was funded by an infrastructure grant from the Nurses’ Health Study, an infrastructure grant from the Health Professionals Follow-up Study, and NIH grants to several of the authors. Dr. Chan has previously served as a consultant for Bayer on work unrelated to this article. No other disclosures were reported.

Sources: Barnard ME et al. JAMA Oncol. 2018 Oct 4. doi: 10.1001/jamaoncol.2018.4149; Simon TG et al. JAMA Oncol. 2018 Oct 4. doi: 10.1001/jamaoncol.2018.4154.


Two-thirds of COPD patients not using inhalers correctly

Article Type
Changed
Fri, 06/23/2023 - 16:26

– Two-thirds of U.S. adults with COPD or asthma are making multiple errors in using their metered-dose inhalers (MDIs), according to new research. About half of patients failed to inhale slowly and deeply to ensure they received the appropriate dose, and about 40% of patients failed to hold their breath for 5-10 seconds afterward so that the medication made its way to their lungs, the findings show.

“There’s a need to educate patients on proper inhalation technique to optimize the appropriate delivery of medication,” Maryam Navaie, DrPH, of Advance Health Solutions in New York, told attendees at the annual meeting of the American College of Chest Physicians. She also urged practitioners to think more carefully about which devices to prescribe based on each patient’s individual attributes.

“Nebulizer devices may be a better consideration for patients who have difficulty performing the necessary steps required by handheld inhalers,” Dr. Navaie said.

She and fellow researchers conducted a systematic review to gain more insights into the errors and difficulties experienced by U.S. adults using MDIs for COPD or asthma. They combed through PubMed, EMBASE, PsycINFO, Cochrane, and Google Scholar databases for English language studies about MDI-related errors in U.S. adult COPD or asthma patients published between January 2003 and February 2017.

The researchers included only randomized controlled trials and cross-sectional and observational studies, and they excluded studies with combined error rates across multiple devices so they could better parse out the data. They also used baseline rates only in studies that involved an intervention to reduce errors.

The researchers defined the proportion of overall MDI errors as “the percentage of patients who made errors in equal to or greater than 20% of inhalation steps.” They computed pooled estimates and created forest plots for both overall errors and for errors according to each step in using an MDI.

The eight studies they identified involved 1,221 patients; mean ages across studies ranged from 48 to 82 years, and 53% of patients were female. Nearly two-thirds of the patients had COPD (63.6%), while 36.4% had asthma. Most of the devices studied were MDIs alone (68.8%), while 31.2% included a spacer.

The pooled weighted average revealed a 66.5% error rate; that is, two-thirds of all the patients were making at least two errors across the 10 steps involved in using their device. The researchers then used individual error rate data from five studies to calculate the overall error rate for each step in using MDIs. The most common error, made by 73.8% of people in those five studies, was failing to attach the inhaler to the spacer. In addition, 68.7% of patients failed to exhale fully and away from the inhaler before inhaling, and 47.8% inhaled too fast instead of deeply.
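The report does not detail how the pooled estimate was computed; as a hedged illustration only, a sample-size-weighted average of per-study error rates would look something like the sketch below (the per-study counts and rates are hypothetical, not the actual study-level data).

```python
# Hypothetical per-study data: (number of patients, proportion who made
# errors in >= 20% of the inhalation steps, i.e. at least 2 of 10 steps)
studies = [
    (200, 0.60),
    (150, 0.72),
    (300, 0.68),
    (250, 0.65),
]

total_n = sum(n for n, _ in studies)
pooled_rate = sum(n * rate for n, rate in studies) / total_n
print(f"Pooled weighted error rate: {pooled_rate:.1%}")  # 66.1% with these made-up numbers
```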

“So these [findings] actually give you [some specific] ideas of how we could help improve patients’ ability to use the device properly,” Dr. Navaie told attendees, adding that these data can inform patient education needs and interventions.

Based on the data from those five studies, the error rates for all 10 steps to using an MDI were as follows:

  • Failed to shake inhaler before use (37.9%).
  • Failed to attach inhaler to spacer (73.8%).
  • Failed to exhale fully and away from inhaler before inhalation (68.7%).
  • Failed to place mouthpiece between teeth and sealed lips (7.4%).
  • Failed to actuate once during inhalation (24.4%).
  • Inhalation too fast, not deep (47.8%).
  • Failed to hold breath for 5-10 seconds (40.1%).
  • Failed to remove the inhaler/spacer from mouth (11.3%).
  • Failed to exhale after inhalation (33.2%).
  • Failed to repeat steps for second puff (36.7%).

Dr. Navaie also noted the investigators were surprised to learn that physicians themselves sometimes make several of these errors in explaining to patients how to use their devices.

“I think for the reps and other people who go out and visit doctors, it’s important to think about making sure the clinicians are using the devices properly,” Dr. Navaie said. She pointed out the potential for patients to forget steps between visits.

“One of the things a lot of our clinicians and key opinion leaders told us during the course of this study is that you shouldn’t just educate the patient at the time you are scripting the device but repeatedly because patients forget,” she said. She recommended having patients demonstrate their use of the device at each visit. If patients continue to struggle, it may be worth considering other therapies, such as a nebulizer, for patients unable to regularly use their devices correctly.

The meta-analysis was limited by the sparse research available in general on MDI errors in the U.S. adult population, so the data on error rates for each individual step may not be broadly generalizable. The studies also did not distinguish between rates among users with asthma vs. users with COPD. Further, too few data exist on associations between MDI errors and health outcomes to have a clear picture of the clinical implications of regularly making multiple errors in MDI use.

Dr. Navaie is employed by Advance Health Solutions, which received Sunovion Pharmaceuticals funding for the study.

SOURCE: Navaie M et al. CHEST 2018. doi: 10.1016/j.chest.2018.08.705.



Article Source

REPORTING FROM CHEST 2018

Vitals

 

Key clinical point: About two-thirds (66.5%) of U.S. adults with COPD or asthma make multiple errors when using metered-dose inhalers.

Major finding: 68.7% of patients did not exhale fully and away from the inhaler before inhalation, and 47.8% inhaled too fast rather than slowly and deeply.

Study details: Meta-analysis of eight studies involving 1,221 U.S. adult patients with COPD or asthma who use metered-dose inhalers.

Disclosures: Dr. Navaie is employed by Advance Health Solutions, which received Sunovion Pharmaceuticals funding for the study.

Source: Navaie M et al. CHEST 2018. doi: 10.1016/j.chest.2018.08.705.
