Practice patterns shifted little in the 12 months following the publication of study results and in the 12 months following their incorporation into clinical practice guidelines, according to an analysis of four comparative effectiveness research case studies funded by the National Pharmaceutical Council (NPC).
The case studies, published in the American Journal of Managed Care (Am. J. Manag. Care 2014;20:e208-e220), looked at practice changes after the 2004 PROVE-IT study of statin therapies, the 2004 Mammography With MRI study of breast cancer surveillance methods in women with BRCA1 or BRCA2 mutations, the 2006 SPORT study comparing standard open diskectomy with nonoperative treatment for patients with intervertebral disk herniation, and the 2007 COURAGE trial comparing optimal medical therapy plus percutaneous coronary intervention with optimal medical therapy alone.
"In some cases, we might have expected an uptick or change in what was happening," said report author and NPC Director of Comparative Effectiveness Research Jennifer Graff, Pharm.D., speaking in an interview. "For instance, in the PROVE-IT study, we would have expected, based upon the results, that you would have seen many more providers and patients using intensive statin therapy, ... [but it took] 3 years after the study’s publication before you started to see the change in which types of statins were being used."
The report authors developed suggestions on how to get clinicians to more quickly incorporate the results of comparative effectiveness research.
Involve clinicians, payers, policy makers, and patients in the research design "to make sure we are asking the right questions," said Dr. Graff. This approach is now being taken at the Food and Drug Administration to spur drug development and at the Patient-Centered Outcomes Research Institute, a comparative effectiveness research body created as part of the Affordable Care Act.
Perform more confirmatory studies. "One single study probably won’t change the mind of a provider who is seeing many patients and has in their mind what treatments work," Dr. Graff said. "Similarly, we need to fund studies when clinical opinion is shifting."
Align financial incentives with study results. This can be accomplished through value-based insurance design or outcomes-based provider payment. Bundled payments and incentives for following clinical pathways, such as WellPoint’s recently announced program of bonus payments in oncology, are other examples of such options, Dr. Graff said.
The study was funded by the NPC. Authors Teresa Gibson, Emily Ehrlich, and Amanda Farr reported employment with Truven Health Analytics, which received consulting fees from NPC. Authors Dr. Robert Dubois and Dr. Jennifer Graff are employees of the NPC. The remaining authors reported no financial conflicts of interest.
I disagree with the first sentence of this article. The gist of the article and the interview is that clinicians are guilty of not adopting important new changes in evidence into their practices soon enough.
Dr. Larry Kraiss
What this report does not make clear is that the four comparative effectiveness research (CER) studies cited in the American Journal of Managed Care paper were specifically chosen for analysis because no subsequent studies had been published to contradict the major findings. The period of analysis extended to the end of 2009, so in the case of the two 2004 studies, the waiting period to see whether contradictory results were published lasted as long as 5 years.
To retrospectively criticize practitioners for not responding to the major findings in these studies sooner is logically fallacious because it presumes that we all should have known that no conflicting information would later appear. This is hard to swallow for those of us smarting from being duped by the discredited DECREASE trials regarding perioperative beta-blockade.
The article also claims that clinician behavior is not influenced by clinical practice guidelines. This generalization is inaccurate; the American Journal of Managed Care article itself indicates that clinical practice guidelines do influence practice.
Finally, in the interview, one of the authors bemoans the 3-year lag between the publication of PROVE-IT and more widespread use of intensive statin therapy. In my opinion, this is not an excessively long lag period.
A healthy sense of skepticism regarding the results of individual CER trials, especially randomized controlled trials whose generalizability can be suspect, is still warranted. Waiting for clinical practice guidelines to appear is prudent. This is what happened with PROVE-IT, and it is what should continue to happen going forward.
Dr. Larry Kraiss is a professor and chief of the Division of Vascular Surgery and medical director of the Noninvasive Vascular Laboratory at the University of Utah School of Medicine and an associate medical editor of Vascular Specialist.
FROM THE AMERICAN JOURNAL OF MANAGED CARE