Simple Interventions Save Lives
A new Health Affairs study tested three relatively simple and inexpensive interventions on a hospital unit to prevent the kinds of hospital-acquired infections that cause the deaths of an estimated 99,000 patients each year. Principal investigator Bradford Harris, MD, and colleagues conducted the research on a pediatric ICU at the University of North Carolina at Chapel Hill School of Medicine, finding that patients admitted after these interventions were implemented left the hospital on average two days earlier, at lower cost, and with a 2.3% lower death rate. Study authors projected annual savings of $12 million for a single PICU.1
The simple measures include strict enforcement of standard hand hygiene policies; guideline-recommended measures for ventilator patients, such as elevating the head of the hospital bed; and compliance with guidelines for maintaining central line catheters, along with educational posters and the use of oral care kits.
A recent article in the “Cleveland Plain Dealer” describes efforts in that city’s hospitals to enforce proper hand hygiene.2 MetroHealth Medical Center hired four employees it calls “infection prevention observers,” whose entire job is to make sure that every caregiver who comes near a patient washes his or her hands. They appear openly on the units, carrying clipboards and filling out sheets that track noncompliance.
The hospital’s hand hygiene compliance rate has reached 98% on all medical units (nationwide, the rate is around 50%), while bloodstream infections have dropped to one-third of what they were in 2010. Cleveland Clinic and University Hospitals achieved similar compliance by employing secret observers of staff hand-washing.
CDC epidemiologist and hand hygiene expert Kate Ellingson, MD, told the newspaper that although the importance of hand hygiene has long been understood, consistent compliance is difficult for healthcare workers. Hospitals that use employee monitors, post data, and implement other hand hygiene initiatives, however, tend to show strong compliance.
References
- Harris BD, Hanson H, Christy C, et al. Strict hand hygiene and other practices shortened stays and cut costs and mortality in a pediatric intensive care unit. Health Affairs. 2011;30(9):1751-1761.
- Tribble SJ. Cleveland MetroHealth Medical Center increases hand washing, reduces infections. “Cleveland Plain Dealer” website. Available at: http://www.cleveland.com/healthfit/index.ssf/2011/09/metrohealth_increases_hand_was.html. Accessed Oct. 15, 2011.
Dartmouth Atlas: Little Progress Reducing Readmissions
The newest Dartmouth Atlas report, released Sept. 28, documents striking variation in 30-day hospital readmission rates for Medicare patients across 308 hospital-referral regions.1 The authors found little progress in decreasing 30-day readmissions from 2004 to 2009, while for some conditions and many regions, rates actually went up.
National readmission rates following surgery were 12.7% in both 2004 and 2009; readmissions for medical conditions rose slightly, from 15.9% to 16.1%, over the same period. Only 42% of hospitalized Medicare patients discharged to home had contact with a primary-care physician (PCP) within 14 days of discharge, according to the report.
The Dartmouth Atlas Project (www.dartmouthatlas.org) documents geographic variation in healthcare utilization that is unrelated to outcomes. It offers an extensive database for comparisons by state, county, region, and facility.
The new report is the first to identify a national association between readmission rates and “the overall intensity of inpatient care provided to patients within a region or hospital,” with patterns of relatively high hospital utilization often corresponding with areas of higher readmissions. “Other patients are readmitted simply because they live in a locale where the hospital is used more frequently as a site of care,” the authors note.
Without continuous, high-quality care coordination across sites, the authors write, discharged patients can repeatedly bounce back to emergency rooms and hospitals.
Reference
You've Got (Post-Discharge) Mail
An automated email system that notifies both hospitalists and primary-care physicians (PCPs) about post-discharge test results can help ensure results don’t “fall through the cracks,” according to an abstract presented at HM11.
The report, “Design and Implementation of an Automated Email Notification System for Results of Tests Pending at Discharge,” suggests that automatic emails sent when results are finalized encourage inpatient physicians to retain responsibility for the patient and open a dialogue with PCPs. The authors estimate that physicians are aware of only 40% of the final results of tests pending at discharge.
“Things fall through the cracks,” says Anuj Dalal, MD, FHM, a hospitalist at Brigham and Women’s Hospital in Boston. “This is a method to make sure these test results don’t fall through the cracks.”
Dr. Dalal’s team implemented the automated emails across five services—chemistry, hematology, microbiology, pathology, and radiology—over the past two years. Preliminary data show that the system helps ensure physicians are aware of more test results, but additional research is needed.
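A minimal sketch of how such a notification job might look, assuming a hypothetical results feed, recipient lookup, and mail relay; the abstract does not describe the actual implementation, so every name below is illustrative:

```python
# Hypothetical sketch of an automated post-discharge results notifier.
# The data source, field names, and SMTP details are illustrative
# assumptions, not the system described in the abstract.
import smtplib
from email.message import EmailMessage

SERVICES = {"chemistry", "hematology", "microbiology", "pathology", "radiology"}

def notify_finalized_results(pending_tests, smtp_host="localhost"):
    """Email the discharging hospitalist and the PCP when a test
    that was pending at discharge is finalized."""
    with smtplib.SMTP(smtp_host) as smtp:
        for test in pending_tests:
            if test["service"] not in SERVICES or test["status"] != "final":
                continue
            msg = EmailMessage()
            msg["Subject"] = f"Finalized result for discharged patient {test['mrn']}"
            msg["From"] = "results-notifier@hospital.example"
            # Notify both the inpatient physician and the PCP so the
            # result cannot fall through the cracks between them.
            msg["To"] = ", ".join([test["hospitalist_email"], test["pcp_email"]])
            msg.set_content(
                f"{test['service'].title()} result finalized: {test['name']}\n"
                f"Result: {test['value']}\n"
                "Please review and follow up as indicated."
            )
            smtp.send_message(msg)
```

In practice, a job like this would run inside the hospital’s results infrastructure with authentication and audit logging; the sketch only illustrates the notification flow.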
Still, Dr. Dalal believes creating an email system at a given institution helps, if only by drawing attention to the issue of pending results once a patient has left the hospital. And even if implementing the system at a less-wired hospital proves difficult, the omnipresence of email should ease adoption.
“Everyone has email today,” he adds.
Congrats to the Class of 2013
Clinical informatics, the practice of blending health information technology (HIT) with patient care, is going mainstream. The subspecialty, popular in hospitalist circles, is scheduled to offer board certification following its recent approval by the American Board of Medical Specialties. The first examination will be administered by the American Board of Preventive Medicine and could be held as early as fall 2012, with certificates awarded in early 2013.
AMIA, the informatics trade group, believes the recognition will push more medical schools to integrate informatics into the curriculum, which will only further solidify the subspecialty’s place in modern medicine.
By the Numbers: 209,000
Projected total number of adult in-hospital cardiac arrests treated with a resuscitation response each year in U.S. hospitals.1 Raina Merchant, MD, and colleagues from the University of Pennsylvania Health System derived several estimates from the American Heart Association’s Get with the Guidelines-Resuscitation registry for 2003 to 2007, weighted by total U.S. hospital bed days. The survival rate for in-hospital cardiac arrest is 21%, compared with 10% for arrests in other settings. The authors note that the number of arrests might be rising, which is “important for understanding the burden of in-hospital cardiac arrest and developing strategies to improve care for hospitalized patients,” Dr. Merchant says.
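For readers curious how such a projection works, here is a back-of-envelope sketch of the rate-per-bed-day extrapolation; all numbers below are made-up placeholders, not figures from the study:

```python
# Back-of-envelope version of the extrapolation described above.
# The registry counts and bed-day totals are hypothetical; only the
# method (rate per bed day scaled to national bed days) reflects the
# study's general approach.
registry_arrests = 25_000        # arrests observed in registry hospitals (hypothetical)
registry_bed_days = 30_000_000   # bed days contributed by those hospitals (hypothetical)
us_bed_days = 250_000_000        # total U.S. hospital bed days (hypothetical)

rate_per_bed_day = registry_arrests / registry_bed_days
projected_arrests = rate_per_bed_day * us_bed_days
print(f"Projected annual in-hospital arrests: {projected_arrests:,.0f}")
```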
Reference
Quality, Defined
Pornography. There can be few better hooks for readers than that. Just typing the word is a bit uncomfortable. As is, I imagine, reading it. But it’s effective, and likely why you’ve made it to word 37 of my column—34 words further than you usually get, I imagine.
“What about pornography?” you ask with bated breath. “What could pornography possibly have to do with hospital medicine?” your mind wonders. “Is this the column that (finally) gets Glasheen fired?” the ambulance chaser in you titillates.
By now, you’ve no doubt heard the famous Potter Stewart definition of pornography: “I know it when I see it.” That’s how the former U.S. Supreme Court justice described his threshold for recognizing pornography. It was made famous in a 1960s decision about whether a particular movie scene was protected by the First Amendment right to free speech or was, indeed, a pornographic obscenity to be censored. Stewart, who clearly recognized the need to “define” pornography, also recognized the inherent challenges in doing so. The I-know-it-when-I-see-it benchmark is, of course, flawed, but I defy you to come up with a better definition.
Quality Is, of Course…
I was thinking about pornography (another discomforting phrase to type) recently—and Potter Stewart’s challenge in defining it, specifically—when I was asked about quality in healthcare. The query, which occurred during a several-hour, mind-numbing meeting (is there another type of several-hour meeting?), was “What is quality?” The question, laced with hostility and dripping with antagonism, was posed by a senior physician and directed pointedly at me. Indignantly, I cleared my throat, mentally stepping onto my pedestal to ceremoniously topple this academic egghead with my erudite response.
“Well, quality is, of course,” I confidently retorted, the “of course” added to demonstrate my moral superiority, “the ability to … uhhh, you see … ummmm, you know.” At which point I again cleared my throat not once, not twice, but a socially awkward three times before employing the time-honored, full-body shock-twitch that signifies that you’ve just received an urgent vibrate page (faked, of course) and excused myself from the meeting, never to return.
The reality is that I struggle to define quality. Like Justice Stewart, I think I know quality when I see it, but more precise definitions can be elusive.
And distracting.
It’s Not My Job
Just this morning, I read a news release from a respected physician group trumpeting the fact that their advocacy resulted in the federal government reducing the number of quality data-point requirements in their final rule for accountable-care organizations (ACOs) from 66 to 33. Trumpeting? Is this a good thing? Should we be supporting fewer quality measures? The article quoted a physician leader saying that the original reporting requirements were too burdensome. Too burdensome to whom? My guess is the recipients of our care, often referred to as our patients, wouldn’t categorize quality assurance as “too burdensome.”
I was at another meeting recently in which a respected colleague related her take on the physician role in improving quality. “I don’t think that’s a physician’s job. That’s what we have a quality department for,” she noted. “It’s just too expensive, time-consuming, and boring for physicians to do that kind of work.”
Too burdensome? Not a physician’s job to ensure the delivery of quality care? While I understand the sentiment (the need to have support staff collecting data, recognition of the huge infrastructure requirements, etc.), I can’t help but think that these types of responses are a large part of the struggle we are having with improving quality.
Then again, I would hazard that 0.0 percent of physicians would argue with the premise that we are obliged by the Hippocratic Oath, our moral compass, and our sense of professionalism to provide the best possible care to our patients. If we accept that we aren’t doing that—and we aren’t—then what is the disconnect? Why aren’t we seeking more quality data points? Why isn’t this “our job”?
Definitional Disconnect
Well, the truth is, it is our job. And we know it. The problem is that quality isn’t universally defined and the process of trying to define it often distracts us from the true task at hand—improving patient care.
Few of us would dispute that a wrong-site surgery, or anaphylaxis from administering a medication known to have caused an allergy, represents substandard care. But more often than not, we see quality being measured and defined in less concrete, more obscure ways—ways that my eyes may not view as low quality. These definitions are inherently flawed and breed contempt among providers who are told they aren’t passing muster on metrics they don’t see as “quality.”
So the real disconnect is definitional. Is quality defined by the Institute of Medicine characteristics of safe, effective, patient-centered, timely, efficient, and equitable care? Or is it the rates of underuse, overuse, and misuse of medical treatments and procedures? Or is it defined by individual quality metrics such as those captured by the Centers for Medicare & Medicaid Services (CMS)—you know, things like hospital fall rates, perioperative antibiotic usage, beta-blockers after MI, or whether a patient reported their bathroom as being clean?
Is 30% of the quality of care that we deliver referable to the patient experience (as measured by HCAHPS), as the new value-based purchasing program would have us believe? Is it hospital accreditation through the Joint Commission? Or physician certification through our parent boards? Is quality measured by a physician’s cognitive or technical skills, or where they went to school? Is it experience, medical knowledge, guideline usage?
We use such a mystifying array of metrics to define quality that the issue becomes confused, and physicians who personally believe they are doing a good job can become disenfranchised. To a physician who provides clinically appropriate care around a surgical procedure or treatment of pneumonia, it can be demeaning and demoralizing to suggest that his or her patient did not receive “high quality” care because the bathroom wasn’t clean or the patient didn’t get a flu shot. Yet, this is the message we often send—a message that alienates many physicians, making them cynical about quality and disengaged from quality improvement. The result is that they seek fewer quality data points and defer the job of improving quality to someone else.
Make no mistake: Quality measures have an important role in our healthcare landscape. But to the degree that defining quality confuses, alienates, or disenfranchises providers, we should stop trying to define it. Quality is not a thing, a metric, or an outcome. It is not an elusive, unquantifiable creature that is achievable only by the elite. Quality is simply providing the best possible care. And quality improvement is simply closing the gap between the best possible care and actual care.
In this regard, we can learn a lot from Potter Stewart. We know quality when we see it. And we know what an absence of quality looks like.
Let’s close that gap by putting less energy into defining quality, and putting more energy into the tenacious pursuit of quality.
Dr. Glasheen is physician editor of The Hospitalist.
Heart Care Quality: More Than Following Checklists
A new study evaluating outcomes for hospitals participating in the American Heart Association’s Get with the Guidelines program found no correlation between high performance on care measures for acute myocardial infarction and high performance on those for heart failure, despite overlap between the two sets of care processes (J Am Coll Cardiol. 2011;58:637-644).
A total of 400,000 heart patients were studied, and 283 participating hospitals were stratified into thirds based on their adherence to core quality measures for each disease, with the upper third labeled superior performers. Lead author Tracy Wang, MD, MHS, MSc, of the Duke Clinical Research Institute in Durham, N.C., and colleagues found that hospitals with superior performance for only one of the two diseases had end-result outcomes, such as in-hospital mortality, no better than those of hospitals that were not high performers for either condition. Hospitals with superior performance for both conditions, however, had lower in-hospital mortality rates.
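As a rough illustration of the stratification step, the sketch below splits a handful of hypothetical hospitals into adherence tertiles; the data and column names are invented, not drawn from the study:

```python
# Sketch of the tertile stratification described above, using
# illustrative data. pandas.qcut splits hospitals into thirds by
# adherence so the top third can be labeled "superior".
import pandas as pd

hospitals = pd.DataFrame({
    "hospital": ["A", "B", "C", "D", "E", "F"],
    "ami_adherence": [0.95, 0.88, 0.76, 0.91, 0.69, 0.83],
    "hf_adherence":  [0.92, 0.71, 0.80, 0.89, 0.65, 0.78],
})
for measure in ("ami_adherence", "hf_adherence"):
    hospitals[measure + "_tier"] = pd.qcut(
        hospitals[measure], 3, labels=["lower", "middle", "superior"])
print(hospitals)
```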
“Perhaps quality is more than just following checklists,” Dr. Wang says. “There’s something special about these high-performing hospitals across the board, with better QI, perhaps a little more investment in infrastructure for quality.”
This result, Dr. Wang says, should give hospitalists and other physicians ammunition to ask their hospital administrators for broader investment in quality improvement, not just investment targeted at specific conditions.
Intermountain Risk Score Could Help Heart Failure Cases
A risk measurement model created by the Heart Institute at Intermountain Medical Center in Murray, Utah, may one day be a familiar tool to HM groups.
Known as the Intermountain Risk Score (http://intermountainhealthcare.org/IMRS/), the tool uses 15 parameters culled from the complete blood count (CBC) and basic metabolic profile (BMP) to determine risk. The model, which is free, was used to stratify mortality risk in heart failure patients receiving an implantable cardioverter-defibrillator (ICD) in a paper presented in September at the 15th annual scientific meeting of the Heart Failure Society of America.
The report found that one-year mortality after ICD implantation was 2.4%, 11.8%, and 28.2% for the low-, moderate-, and high-risk groups, respectively. And although the study was narrow in scope, Benjamin Horne, PhD, director of cardiovascular and genetic epidemiology at the institute, says extending the tool to a multitude of inpatient settings is a natural evolution.
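To make the idea concrete, here is a toy sketch of how a lab-based score might bin patients into the reported risk bands; the parameters, weights, and cut points are invented for illustration and are not the published Intermountain model:

```python
# Illustrative sketch of a CBC/BMP-based risk score. The weights,
# parameter list, and cut points are invented; the published
# Intermountain Risk Score uses its own validated values.
HYPOTHETICAL_WEIGHTS = {
    "hemoglobin_low": 2, "rdw_high": 3, "wbc_high": 1,
    "sodium_low": 1, "creatinine_high": 2, "glucose_high": 1,
}

def risk_category(flags):
    """Sum the weights of abnormal-lab flags and bin the total
    into the low/moderate/high bands used for reporting."""
    score = sum(HYPOTHETICAL_WEIGHTS[f] for f in flags)
    if score <= 2:
        return "low"       # ~2.4% one-year mortality in the ICD cohort
    if score <= 5:
        return "moderate"  # ~11.8%
    return "high"          # ~28.2%

print(risk_category({"hemoglobin_low", "rdw_high"}))  # -> "moderate"
```

The appeal Dr. Horne describes is visible even in this toy: the inputs are labs already drawn on nearly every inpatient, so the score adds no new testing burden.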
“One of the things about the innovation of this risk score is the lab tests are so common already,” Dr. Horne says. “They are so familiar to physicians. They’ve been around for decades. What no one had realized before is they had additional risk information contained within them.”
New Jersey Hospital Funds Care-Transitions “Coach”
Robert Wood Johnson University Hospital in Hamilton, N.J., has partnered with Jewish Family and Children’s Services of Greater Mercer County to support care transitions for 350 chronically ill older patients. Patients will receive a transitions coach following hospital discharge for education, support, and encouragement to keep appointments with their physicians. This “coach” will develop a plan of care for the patient, making one hospital visit, one home visit, and three phone calls, says Joyce Schwarz, the hospital’s vice president of quality and the project’s director.
The hospital received a $300,000 grant under the New Jersey Health Initiative from the Robert Wood Johnson Foundation to use an evidence-based intervention to improve care transitions and reduce readmissions, with the coach acting as a bridge between hospital personnel and community physicians.
Should CMS Allow Access to Patient-Protected Medicare Data for Public Reporting?
PRO
Observational, database studies provide a powerful QI supplement
The rules proposed by the Centers for Medicare & Medicaid Services (CMS) to allow access to patient-protected Medicare data will provide greater transparency and data that could be used for comparative-effectiveness research (CER). Thus, these rules have the potential to improve the quality of healthcare and patient safety.
In December 1999, the Institute of Medicine issued its now-famous report “To Err Is Human,” which estimated that medical errors cause up to 98,000 deaths and more than 1 million injuries each year in the U.S.6 However, the evidence shows minimal improvement in patient safety over the past 10 years.
A retrospective study of 10 North Carolina hospitals reported in the New England Journal of Medicine by Landrigan and colleagues found that harms resulting from medical care remained extremely common, with little evidence of improvement.7 It also is estimated that clinical research takes 17 years on average to become incorporated into the majority of clinical practices.8 Although the randomized controlled trial (RCT) is unquestionably the best research tool for exploring simple components of clinical care (e.g., tests, drugs, and procedures), its translation into daily clinical practice remains difficult.
Improving the processes of care that underlie quality remains an extremely difficult proposition because of such sociological issues as resistance to change, the need for interdisciplinary teamwork, levels of support staff, economic factors, inadequate information retrieval, and, most important, the complexity of patients with multiple comorbidities who do not fit the parameters of the RCT.
Don Berwick, MD, a member of the committee behind the landmark IOM report and the current CMS administrator, has stated that “in such complex terrain, the RCT is an impoverished way to learn.”9 Factors that create this chasm include:10
- RCTs that are too narrowly focused;
- Greater resource requirements for RCTs, including financial and personnel support, compared with usual clinical practice;
- Lack of collaboration between academic medical center researchers and community clinicians; and
- Lack of expertise and experience to undertake quality improvement in healthcare.
CER received a $1.1 billion investment through the American Recovery and Reinvestment Act to provide evidence on the effectiveness, benefits, and harms of various treatment options.11 To advance the IOM’s goals for healthcare, stronger evidence is desperately needed to cross the translational gap between clinical research and the bedside.12 Observational outcome studies based on registries or databases derived primarily from clinical care can provide a powerful supplement to quality improvement.13
Thus, the ability to combine Medicare claims with other data through the Availability of Medicare Data for Performance Measurement rule would supply a wealth of information with the potential to improve quality. As a cautionary note, safeguards such as provider review and appeal, monitoring of the information’s validity, and restricting use of the data to quality improvement are vital.
Dr. Holder is medical director of hospitalist services and chief medical information officer at Decatur (Ill.) Memorial Hospital. He is a member of Team Hospitalist.
CON
Unanswered questions, risks make CMS plan a bad idea
On June 8, the Centers for Medicare & Medicaid Services (CMS) proposed a rule to allow “qualified entities” access to patient-protected Medicare data for publishing provider performance reports. CMS allowed 60 days for public comment and set a start date of Jan. 1, 2012. But this controversial rule appeared with short notice, little discussion, and an abbreviated opportunity for comment.
CMS maintains the rule will result in higher-quality, more cost-effective care. Given the volume of data already published on multiple performance parameters for both hospitals and providers, it would seem prudent to have solid evidence of efficacy before imposing more required reporting, and more costs, on the industry.1,2,3
Physicians and hospitals will have 30 days to review and verify three years of CMS claims data before the data are released. The burden and cost of review will be borne by the private practices involved.1 This process will impose added administrative costs, and it is unlikely three years of data can be carefully reviewed in just 30 days. If practitioners find the review too cumbersome and expensive, which is likely, they will forgo it, putting the accuracy of the data in question.
Quality data already is published for both physicians and hospitals. Is there evidence this process will significantly increase transparency? Adding more layers of administrative work for both CMS and caregivers—higher overhead without defined benefit—seems an ill-conceived idea. From an evidence-based-practice standpoint, where is the evidence that this rule will improve “quality” and make care “cost-effective”? Have the risks (added bureaucracy, increased overhead, questionable data) and benefits (added transparency) been evaluated?
Additionally, it is unclear who will be monitoring the quality of the data published and who will provide oversight for the “entities” to ensure these data are fairly and accurately presented. Who will pay for this oversight, and what recourse will be afforded physicians and hospitals that feel they have been wronged?4,5
The “qualified entities” will pay CMS to cover the cost of providing the data, raising concerns that this practice could evolve into patient-data “purchasing.” Although the selected entities likely will be industry leaders (at least initially) with the capability to protect data, is this not another opportunity for misuse or corruption in the system?
Other issues not clearly addressed include the nature of the patient-protected information and who will interpret these data in a clinical context. How will the data be adjusted for patient comorbidities and case mix, or will they be published without regard to these important confounders?1,3
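For readers unfamiliar with case-mix adjustment, one common approach is indirect standardization: compare a hospital’s observed outcomes with the outcomes expected given its patients’ risk profiles. The sketch below uses an invented risk model and invented data purely to illustrate the mechanics; nothing here reflects any methodology specified in the CMS proposal.

```python
# Illustrative sketch of indirect standardization (observed-to-expected ratio).
# The risk model and patient data are invented; real adjustment models use
# far richer clinical variables and validated coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented patient-level data: [comorbidity count, age], plus 30-day mortality.
X = np.array([[0, 65], [3, 80], [1, 72], [4, 85], [0, 60], [2, 78]])
y = np.array([0, 1, 0, 1, 0, 0])

# Fit a risk model on the whole population to estimate each patient's
# expected probability of death given comorbidities and age.
model = LogisticRegression().fit(X, y)
expected = model.predict_proba(X)[:, 1]

# Suppose the first three patients were treated at one hospital.
observed_deaths = y[:3].sum()
expected_deaths = expected[:3].sum()

# O/E > 1 suggests worse-than-expected outcomes for that case mix;
# O/E < 1 suggests better than expected.
print("O/E ratio:", observed_deaths / expected_deaths)
```

The sketch also shows why the question matters: an unadjusted rate and an O/E ratio can rank the same hospitals very differently, so publishing one without the other can mislead.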
Publishing clinical data for quality assurance and feedback purposes is essential for quality care. Transparency has increased consumer confidence in the healthcare system and, indeed, has increased the healthcare system’s responsiveness to quality concerns. Granting the benefits of transparency, published data must be precise, accurate, and managed with good oversight to ensure the process does not unfairly target providers or skew results. Another program, especially one being fast-tracked and making once-protected patient information available to loosely specified entities, raises many questions. Who will watch these entities to ensure fair interpretation? Is this yet another layer of CMS bureaucracy? In an era of evidence-based medicine, where is the evidence that this program will change the system for the better?
Dr. Brezina is a hospitalist at Durham Regional Hospital in North Carolina.
References
- Under the magnifying glass (again): CMS proposes new access to Medicare data for public provider performance reports. Bass, Berry & Sims website. Available at: http://www.bassberry.com/communicationscenter/newsletters/. Accessed Aug. 31, 2011.
- Controversial rule to allow access to Medicare data. Modern Healthcare website. Available at: http://www.modernHealthcare.com. Accessed Aug. 31, 2011.
- Physician report cards must give correct grades. American Medical News website. Available at: http://www.ama-assn.org/amednews/2011/09/05/edsa0905.htm. Accessed Sept. 12, 2011.
- OIG identifies huge lapses in hospital security, shifts its focus from CMS to OCR. Atlantic Information Services Inc. website. Available at: http://www.AISHealth.com. Accessed Sept. 12, 2011.
- Berry M. Insurers mishandle 1 in 5 claims, AMA finds. American Medical News website. Available at: http://www.ama-assn.org/amednews/2011/07/04/prl20704.htm. Accessed Sept. 12, 2011.
- Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington: National Academies Press; 1999.
- Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124-2134.
- Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington: National Academy Press; 2001:13.
- Berwick DM. The science of improvement. JAMA. 2008;299(10):1182-1184.
- Ting HH, Shojania KG, Montori VM, Bradley EH. Quality improvement science and action. Circulation. 2009;119:1962-1974.
- Committee on Comparative Effectiveness Research Prioritization, Institute of Medicine. Initial national priorities for comparative effectiveness research. Washington: National Academies Press; 2009.
- Sullivan P, Goldman D. The promise of comparative effectiveness research. JAMA. 2011;305(4):400-401.
- Washington AE, Lipstein SH. The Patient-Centered Outcomes Research Institute: promoting better information, decisions, and health. N Engl J Med. Published online Sept. 28, 2011. DOI: 10.1056/NEJMp1109407.