Lost in Transition
It’s been nearly two decades since I graduated from medical school. I think back and I honestly do not remember any lectures about transitions of care.
During residency, I remember some attending physicians would insist that when I discharged patients from the hospital, the patients had to leave with post-discharge appointments in hand. Like any diligent intern, I did as I was told. I telephoned the administrative assistants in clinic and booked follow-up appointments for my patients. I always asked for the first available appointment. Why? Because that was what my senior resident told me to do. I suspect he learned that from his resident as well.
Sometimes the appointment was scheduled for the week following discharge; other times it was six months later. I honestly didn’t give it much thought. There was a blank on the discharge paperwork and I filled it in with a date and time. I was doing my job—or so I thought.
Can you imagine if someone just gave you a slip of paper today telling you when to show up to get your teeth cleaned without consulting your schedule? How about scheduling the oil change for your car at a garage 100 miles away? Seems pretty silly, doesn’t it? Nothing about it seems customer-centric or cost-efficient.
With such a system in place, why are we surprised when patients do not show up for their follow-up appointments? When the patient presents to the ED later and is readmitted to the hospital, we label them as “non-compliant” because they failed to show up for their follow-up appointment.
Inefficient, Ineffective, Inappropriate
There are multiple problems with the above situation. The first problem: Why are doctors calling to schedule follow-up appointments in the first place? Do we ask airline pilots to serve refreshments? I suppose they could, but I’d rather they concentrate on flying the plane. It also seems like an awful waste of money and resources when we could accomplish the same feat with less-expensive flight attendants who are better trained to interact with passengers.
At most teaching hospitals across the country, I suspect we still rely on trainees to book follow-up appointments for patients. At hospitals without trainees, I suspect some of this responsibility falls on nurses and unit coordinators. Again, I wonder how often these people are actually in a position to schedule an appointment that the patient is likely to keep—or whether they are filling in a box on a checklist like I used to do.
Common Problem?
How do other industries address this issue? Well, many utilize customer service representatives to help consumers book their appointments. Some industries have advanced software, which allows consumers to book their own appointment online. I have to tell you that I am chuckling as I write this. I’m chuckling not because this is funny—I am just amazed that something that is so common sense is not utilized consistently across the hospital industry. When was the last time you actually called a hotel to book a room? Most of us find it so much more convenient to book airline tickets or hotel rooms online.
If we were to create a system with the consumer’s satisfaction and cost in mind, would you rely on trainees, nurses, or unit coordinators to book follow-up appointments? I suppose Hypothetical System 2.0 would include consumer representatives speaking with patients to book appointments. Hypothetical System 3.0 would allow patients and/or a family member to book the appointment online.
I can tell you that folks at Beth Israel Deaconess Medical Center in Boston, where I work, have given this some thought. We are nowhere near a 3.0 version, but we do rely on professional appointment-makers to work with our hospitalized patients to book follow-up appointments. Inpatient providers put in the order online requesting follow-up appointments for their hospitalized patients. The online application asks the provider to specify the requests. Does the patient need follow-up with specialists, as well as their primary outpatient provider? The inpatient provider can specify the window of time in which they recommend follow-up for the patient. If I want my patient to follow up with their primary-care physician (PCP) within one week and with their cardiologist within two weeks, the appointment-maker will work with the patient and the respective doctors’ offices to make this happen. I am contacted only if any issues arise.
All of this information is provided to the patient with their other discharge paperwork. Some of you might be asking: How can the hospital afford to pay for this software and for the cadre of professional appointment-makers? I am wondering how hospitals can afford not to. It’s like worrying about the cost of a college degree until you realize how difficult it is to get a job without one.
Part of the PCP “access” problem we have in this country is due to the fact that not every patient shows up for scheduled appointments. Our appointment-makers minimize the “no show” rate because, by speaking with patients about their schedules, they book appointments the patients are actually likely to keep. One of the things we learned at Beth Israel was that our trainees were sometimes requesting appointments for patients within one week of discharge when I knew darn well that the patient was unlikely to make that appointment because the patient most likely would still be at rehab.
Prior to this system, we also had the occasional PCP who was upset because we booked their patient’s follow-up with a specialist who was outside that PCP’s “inner circle” of specialists. How in the world are any of us supposed to remember this information?
Well, our professional appointment-makers utilize this information as part of the algorithm they follow when booking appointments for patients. As our nation moves towards a value-based purchasing system for healthcare, we don’t need to reinvent the wheel; we can adopt proven practices from other cost-effective industries—and we can improve customer satisfaction.
I am interested in hearing how appointments are arranged for your hospitalized patients. Send me your thoughts at [email protected].
Dr. Li is president of SHM.
Quality, Defined
Pornography. There can be few better hooks for readers than that. Just typing the word is a bit uncomfortable. As is, I imagine, reading it. But it’s effective, and likely why you’ve made it to word 37 of my column—34 words further than you usually get, I imagine.
“What about pornography?” you ask with bated breath. “What could pornography possibly have to do with hospital medicine?” your mind wonders. “Is this the column that (finally) gets Glasheen fired?” the ambulance chaser in you titillates.
By now, you’ve no doubt heard the famous Potter Stewart definition of pornography: “I know it when I see it.” That’s how the former U.S. Supreme Court justice described his threshold for recognizing pornography. It was made famous in a 1960s decision about whether a particular movie scene was protected by the First Amendment right to free speech or, indeed, a pornographic obscenity to be censored. Stewart, who clearly recognized the need to “define” pornography, also recognized the inherent challenges in doing so. The I-know-it-when-I-see-it benchmark is, of course, flawed, but I defy you to come up with a better definition.
Quality Is, of Course…
I was thinking about pornography (another discomforting phrase to type) recently—and Potter Stewart’s challenge in defining it, specifically—when I was asked about quality in healthcare. The query, which occurred during a several-hour, mind-numbing meeting (is there another type of several-hour meeting?), was “What is quality?” The question, laced with hostility and dripping with antagonism, was posed by a senior physician and directed pointedly at me. Indignantly, I cleared my throat, mentally stepping onto my pedestal to ceremoniously topple this academic egghead with my erudite response.
“Well, quality is, of course,” I confidently retorted, the “of course” added to demonstrate my moral superiority, “the ability to … uhhh, you see … ummmm, you know.” At which point I again cleared my throat not once, not twice, but a socially awkward three times before employing the time-honored, full-body shock-twitch that signifies that you’ve just received an urgent vibrating page (faked, of course) and excused myself from the meeting, never to return.
The reality is that I struggle to define quality. Like Justice Stewart, I think I know quality when I see it, but more precise definitions can be elusive.
And distracting.
It’s Not My Job
Just this morning, I read a news release from a respected physician group trumpeting the fact that their advocacy resulted in the federal government reducing the number of quality data-point requirements in their final rule for accountable-care organizations (ACOs) from 66 to 33. Trumpeting? Is this a good thing? Should we be supporting fewer quality measures? The article quoted a physician leader saying that the original reporting requirements were too burdensome. Too burdensome to whom? My guess is the recipients of our care, often referred to as our patients, wouldn’t categorize quality assurance as “too burdensome.”
I was at another meeting recently in which a respected colleague related her take on the physician role in improving quality. “I don’t think that’s a physician’s job. That’s what we have a quality department for,” she noted. “It’s just too expensive, time-consuming, and boring for physicians to do that kind of work.”
Too burdensome? Not a physician’s job to ensure the delivery of quality care? While I understand the sentiment (the need to have support staff collecting data, recognition of the huge infrastructure requirements, etc.), I can’t help but think that these types of responses are a large part of the struggle we are having with improving quality.
Then again, I would hazard that 0.0 percent of physicians would argue with the premise that we are obliged by the Hippocratic Oath, our moral compass, and our sense of professionalism to provide the best possible care to our patients. If we accept that we aren’t doing that—and we aren’t—then what is the disconnect? Why aren’t we seeking more quality data points? Why isn’t this “our job”?
Definitional Disconnect
Well, the truth is, it is our job. And we know it. The problem is that quality isn’t universally defined and the process of trying to define it often distracts us from the true task at hand—improving patient care.
Few of us would dispute that a wrong-site surgery or anaphylaxis from administration of a medication known to have caused an allergy represents a suboptimal level of care. But more often than not, we see quality being measured and defined in less concrete, more obscure ways—ways that my eyes may not view as low-quality. These definitions are inherently flawed and breed contempt among providers who are told they aren’t passing muster in metrics they don’t see as “quality.”
So the real disconnect is definitional. Is quality defined by the Institute of Medicine characteristics of safe, effective, patient-centered, timely, efficient, and equitable care? Or is it the rates of underuse, overuse, and misuse of medical treatments and procedures? Or is it defined by individual quality metrics such as those captured by the Centers for Medicare & Medicaid Services (CMS)—you know, things like hospital fall rates, perioperative antibiotic usage, beta-blockers after MI, or whether a patient reported their bathroom as being clean?
Is 30% of the quality of care that we deliver referable to the patient experience (as measured by HCAHPS), as the new value-based purchasing program would have us believe? Is it hospital accreditation through the Joint Commission? Or physician certification through our parent boards? Is quality measured by a physician’s cognitive or technical skills, or where they went to school? Is it experience, medical knowledge, guideline usage?
We use such a mystifying array of metrics to define quality that it confuses the issue, and physicians who personally believe they are doing a good job can become disenfranchised. To a physician who provides clinically appropriate care around a surgical procedure or treatment of pneumonia, it can be demeaning and demoralizing to suggest that his or her patient did not receive “high quality” care because the bathroom wasn’t clean or the patient didn’t get a flu shot. Yet, this is the message we often send—a message that alienates many physicians, making them cynical about quality and disengaged from quality improvement. The result is that they seek fewer quality data points and defer the job of improving quality to someone else.
Make no mistake: Quality measures have an important role in our healthcare landscape. But to the degree that defining quality confuses, alienates, or disenfranchises providers, we should stop trying to define it. Quality is not a thing, a metric, or an outcome. It is not an elusive, unquantifiable creature that is achievable only by the elite. Quality is simply providing the best possible care. And quality improvement is simply closing the gap between the best possible care and actual care.
In this regard, we can learn a lot from Potter Stewart. We know quality when we see it. And we know what an absence of quality looks like.
Let’s close that gap by putting less energy into defining quality, and putting more energy into the tenacious pursuit of quality.
Dr. Glasheen is physician editor of The Hospitalist.
Pornography. There can be few better hooks for readers than that. Just typing the word is a bit uncomfortable. As is, I imagine, reading it. But it’s effective, and likely why you’ve made it to word 37 of my column—34 words further than you usually get, I imagine.
“What about pornography?” you ask with bated breath. “What could pornography possibly have to do with hospital medicine?” your mind wonders. “Is this the column that (finally) gets Glasheen fired?” the ambulance chaser in you titillates.
By now, you’ve no doubt heard the famous Potter Stewart definition of pornography: “I know it when I see it.” That’s how the former U.S. Supreme Court justice described his threshold for recognizing pornography. It was made famous in a 1960s decision about whether a particular movie scene was protected by the 1st Amendment right to free speech or, indeed, a pornographic obscenity to be censured. Stewart, who clearly recognized the need to “define” pornography, also recognized the inherent challenges in doing so. The I-know-it-when-I-see-it benchmark is, of course, flawed, but I defy you to come up with a better definition.
Quality Is, of Course…
I was thinking about pornography (another discomforting phrase to type) recently—and Potter Stewart’s challenge in defining it, specifically—when I was asked about quality in healthcare. The query, which occurred during a several-hour, mind-numbing meeting (is there another type of several-hour meeting?), was “What is quality?” The question, laced with hostility and dripping with antagonism, was posed by a senior physician and directed pointedly at me. Indignantly, I cleared my throat, mentally stepping onto my pedestal to ceremoniously topple this academic egghead with my erudite response.
“Well, quality is, of course,” I confidently retorted, the “of course” added to demonstrate my moral superiority, “the ability to … uhhh, you see … ummmm, you know.” At which point I again cleared my throat not once, not twice, but a socially awkward three times before employing the timed-honored, full-body shock-twitch that signifies that you’ve just received an urgent vibrate page (faked, of course) and excused myself from the meeting, never to return.
The reality is that I struggle to define quality. Like Chief Justice Stewart, I think I know quality when I see it, but more precise definitions can be elusive.
And distracting.
It’s Not My Job
Just this morning, I read a news release from a respected physician group trumpeting the fact that their advocacy resulted in the federal government reducing the number of quality data-point requirements in their final rule for accountable-care organizations (ACOs) from 66 to 33. Trumpeting? Is this a good thing? Should we be supporting fewer quality measures? The article quoted a physician leader saying that the original reporting requirements were too burdensome. Too burdensome to whom? My guess is the recipients of our care, often referred to as our patients, wouldn’t categorize quality assurance as “too burdensome.”
I was at another meeting recently in which a respected colleague related her take on the physician role in improving quality. “I don’t think that’s a physician’s job. That’s what we have a quality department for,” she noted. “It’s just too expensive, time-consuming, and boring for physicians to do that kind of work.”
Too burdensome? Not a physician’s job to ensure the delivery of quality care? While I understand the sentiment (the need to have support staff collecting data, recognition of the huge infrastructure requirements, etc.), I can’t help but think that these types of responses are a large part of the struggle we are having with improving quality.
Then again, I would hazard that 0.0 percent of physicians would argue with the premise that we are obliged by the Hippocratic Oath, our moral compass, and our sense of professionalism to provide the best possible care to our patients. If we accept that we aren’t doing that—and we aren’t—then what is the disconnect? Why aren’t we seeking more quality data points? Why isn’t this “our job”?
Definitional Disconnect
Well, the truth is, it is our job. And we know it. The problem is that quality isn’t universally defined and the process of trying to define it often distracts us from the true task at hand—improving patient care.
Pornography. There can be few better hooks for readers than that. Just typing the word is a bit uncomfortable. As is, I imagine, reading it. But it’s effective, and likely why you’ve made it to word 37 of my column—34 words further than you usually get, I imagine.
“What about pornography?” you ask with bated breath. “What could pornography possibly have to do with hospital medicine?” your mind wonders. “Is this the column that (finally) gets Glasheen fired?” the ambulance chaser in you titillates.
By now, you’ve no doubt heard the famous Potter Stewart definition of pornography: “I know it when I see it.” That’s how the former U.S. Supreme Court justice described his threshold for recognizing pornography. It was made famous in a 1960s decision about whether a particular movie scene was protected by the First Amendment right to free speech or was, indeed, a pornographic obscenity to be censored. Stewart, who clearly recognized the need to “define” pornography, also recognized the inherent challenges in doing so. The I-know-it-when-I-see-it benchmark is, of course, flawed, but I defy you to come up with a better definition.
Quality Is, of Course…
I was thinking about pornography (another discomforting word to type) recently—and Potter Stewart’s challenge in defining it, specifically—when I was asked about quality in healthcare. The query, which occurred during a several-hour, mind-numbing meeting (is there another type of several-hour meeting?), was “What is quality?” The question, laced with hostility and dripping with antagonism, was posed by a senior physician and directed pointedly at me. Indignantly, I cleared my throat, mentally stepping onto my pedestal to ceremoniously topple this academic egghead with my erudite response.
“Well, quality is, of course,” I confidently retorted, the “of course” added to demonstrate my moral superiority, “the ability to … uhhh, you see … ummmm, you know.” At which point I again cleared my throat not once, not twice, but a socially awkward three times before employing the time-honored, full-body shock-twitch that signifies you’ve just received an urgent vibrating page (faked, of course) and excusing myself from the meeting, never to return.
The reality is that I struggle to define quality. Like Justice Stewart, I think I know quality when I see it, but more precise definitions can be elusive.
And distracting.
It’s Not My Job
Just this morning, I read a news release from a respected physician group trumpeting the fact that their advocacy resulted in the federal government reducing the number of quality data-point requirements in their final rule for accountable-care organizations (ACOs) from 66 to 33. Trumpeting? Is this a good thing? Should we be supporting fewer quality measures? The article quoted a physician leader saying that the original reporting requirements were too burdensome. Too burdensome to whom? My guess is the recipients of our care, often referred to as our patients, wouldn’t categorize quality assurance as “too burdensome.”
I was at another meeting recently in which a respected colleague related her take on the physician role in improving quality. “I don’t think that’s a physician’s job. That’s what we have a quality department for,” she noted. “It’s just too expensive, time-consuming, and boring for physicians to do that kind of work.”
Too burdensome? Not a physician’s job to ensure the delivery of quality care? While I understand the sentiment (the need to have support staff collecting data, recognition of the huge infrastructure requirements, etc.), I can’t help but think that these types of responses are a large part of the struggle we are having with improving quality.
Then again, I would hazard that 0.0 percent of physicians would argue with the premise that we are obliged by the Hippocratic Oath, our moral compass, and our sense of professionalism to provide the best possible care to our patients. If we accept that we aren’t doing that—and we aren’t—then what is the disconnect? Why aren’t we seeking more quality data points? Why isn’t this “our job”?
Definitional Disconnect
Well, the truth is, it is our job. And we know it. The problem is that quality isn’t universally defined and the process of trying to define it often distracts us from the true task at hand—improving patient care.
Few of us would argue that a wrong-site surgery or anaphylaxis from administration of a medication known to have caused an allergy represents a suboptimal level of care. But more often than not, we see quality being measured and defined in less concrete, more obscure ways—ways that my eyes may not view as low-quality. These definitions are inherently flawed and breed contempt among providers who are told they aren’t passing muster in metrics they don’t see as “quality.”
So the real disconnect is definitional. Is quality defined by the Institute of Medicine characteristics of safe, effective, patient-centered, timely, efficient, and equitable care? Or is it the rates of underuse, overuse, and misuse of medical treatments and procedures? Or is it defined by individual quality metrics such as those captured by the Centers for Medicare & Medicaid Services (CMS)—you know, things like hospital fall rates, perioperative antibiotic usage, beta-blockers after MI, or whether a patient reported their bathroom as being clean?
Is 30% of the quality of care that we deliver referable to the patient experience (as measured by HCAHPS), as the new value-based purchasing program would have us believe? Is it hospital accreditation through the Joint Commission? Or physician certification through our parent boards? Is quality measured by a physician’s cognitive or technical skills, or where they went to school? Is it experience, medical knowledge, guideline usage?
We use such a mystifying array of metrics to define quality that physicians who personally believe they are doing a good job can become disenfranchised. To a physician who provides clinically appropriate care around a surgical procedure or treatment of pneumonia, it can be demeaning and demoralizing to suggest that his or her patient did not receive “high quality” care because the bathroom wasn’t clean or the patient didn’t get a flu shot. Yet, this is the message we often send—a message that alienates many physicians, making them cynical about quality and disengaged in quality improvement. The result is that they seek fewer quality data points and defer the job of improving quality to someone else.
Make no mistake: Quality measures have an important role in our healthcare landscape. But to the degree that defining quality confuses, alienates, or disenfranchises providers, we should stop trying to define it. Quality is not a thing, a metric, or an outcome. It is not an elusive, unquantifiable creature that is achievable only by the elite. Quality is simply providing the best possible care. And quality improvement is simply closing the gap between the best possible care and actual care.
In this regard, we can learn a lot from Potter Stewart. We know quality when we see it. And we know what an absence of quality looks like.
Let’s close that gap by putting less energy into defining quality, and putting more energy into the tenacious pursuit of quality.
Dr. Glasheen is physician editor of The Hospitalist.
Seven-Day Schedule Could Improve Hospital Quality, Capacity
A new study evaluating outcomes for hospitals participating in the American Heart Association’s Get with the Guidelines program found no correlation between high performance on adherence to care measures for acute myocardial infarction and high performance on measures for heart failure, despite overlap between the two sets of care processes (J Am Coll Cardiol. 2011;58:637-644).
A total of 400,000 heart patients were studied, and 283 participating hospitals were stratified into thirds based on their adherence to core quality measures for each disease, with the upper third labeled superior in performance. Lead author Tracy Wang, MD, MHS, MSc, of the Duke Clinical Research Institute in Durham, N.C., and colleagues found that superior performance for only one of the two diseases led to such end-result outcomes as in-hospital mortality that were no better than for hospitals that were not high performers for either condition. But hospitals with superior performance for both conditions had lower in-hospital mortality rates.
“Perhaps quality is more than just following checklists,” Dr. Wang says. “There’s something special about these high-performing hospitals across the board, with better QI, perhaps a little more investment in infrastructure for quality.”
This result, Dr. Wang says, should give ammunition for hospitalists and other physicians to go to their hospital administrators to request more investment in quality improvement overall, not just for specific conditions.
Intermountain Risk Score Could Help Heart Failure Cases
A risk measurement model created by the Heart Institute at Intermountain Medical Center in Murray, Utah, may one day be a familiar tool to HM groups.
Known as the Intermountain Risk Score (http://intermountainhealthcare.org/IMRS/), the tool uses 15 parameters culled from complete blood counts (CBC) and the basic metabolic profile (BMP) to determine risk. The model, which is free, was used to stratify mortality risk in heart failure patients receiving an implantable cardioverter-defibrillator (ICD) in a paper presented in September at the 15th annual scientific meeting of the Heart Failure Society of America.
The report found that mortality at one-year post-ICD was 2.4%, 11.8%, and 28.2% for the low-, moderate-, and high-risk groups, respectively. And while the study was narrow in its topic, Benjamin Horne, PhD, director of cardiovascular and genetic epidemiology at the institute, says its application to a multitude of inpatient settings is a natural evolution for the tool.
“One of the things about the innovation of this risk score is the lab tests are so common already,” Dr. Horne says. “They are so familiar to physicians. They’ve been around for decades. What no one had realized before is they had additional risk information contained within them.”
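For readers curious about the mechanics, the general shape of such a lab-based score can be sketched in a few lines. The weights, lab names, and cutoffs below are hypothetical placeholders for illustration only—they are not the published Intermountain model, which derives its parameters from the actual CBC and BMP panels.

```python
# Purely illustrative sketch of a weighted lab-based risk score.
# All weights and cutoffs are hypothetical, NOT the published IMRS.

def lab_risk_score(labs, weights):
    """Combine routine lab values into a single weighted score."""
    return sum(weights[name] * value for name, value in labs.items() if name in weights)

def risk_group(score, low_cutoff, high_cutoff):
    """Map a numeric score to the low/moderate/high strata reported in the study."""
    if score < low_cutoff:
        return "low"
    if score < high_cutoff:
        return "moderate"
    return "high"

# Hypothetical weights and a sample patient's labs:
weights = {"hemoglobin": -0.5, "rdw": 1.2, "creatinine": 0.8, "sodium": -0.1}
labs = {"hemoglobin": 11.2, "rdw": 16.5, "creatinine": 1.9, "sodium": 133}
score = lab_risk_score(labs, weights)
print(risk_group(score, low_cutoff=0.0, high_cutoff=5.0))
```

The point of the sketch is the structure, not the numbers: everything the score needs is already sitting in routine labs, which is exactly Dr. Horne’s observation below.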
New Jersey Hospital Funds Care-Transitions “Coach”
Robert Wood Johnson University Hospital in Hamilton, N.J., has partnered with Jewish Family and Children’s Services of Greater Mercer County to support care transitions for 350 chronically ill older patients. Patients will receive a transitions coach following hospital discharge for education, support, and encouragement to keep appointments with their physicians. This “coach” will develop a plan of care for the patient, making one hospital visit, one home visit, and three phone calls, says Joyce Schwarz, the hospital’s vice president of quality and the project’s director.
The hospital received a $300,000 grant under the New Jersey Health Initiative from the Robert Wood Johnson Foundation to use an evidence-based intervention to improve care transitions and reduce readmissions, acting as a bridge between hospital personnel and community physicians.
‘Smoothing’ Strategies in Children’s Hospitals Reduce Overcrowding
A report published online May 24 in the Journal of Hospital Medicine found that smoothing inpatient occupancy and scheduled admissions in 39 children’s hospitals helped reduce midweek overcrowding. Evan S. Fieldston, MD, MBA, MSHP, of the University of Pennsylvania School of Medicine in Philadelphia and colleagues previously demonstrated occupancy variability and midweek crowding relative to weekends (J Hosp Med. 2011;6:81-87). Strategies the team studied included controlling admissions when possible to achieve more level occupancy, with a mean of 2.6% of admissions moved to a different day of the week.
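The mechanics of smoothing are easy to illustrate. The sketch below uses hypothetical census numbers—not the study’s data—to show how shifting a small fraction of scheduled admissions off peak weekdays lowers the midweek peak while leaving total weekly volume unchanged.

```python
# Illustrative only: toy weekly admission counts, not the study's data.
scheduled = {"Mon": 30, "Tue": 34, "Wed": 36, "Thu": 32, "Fri": 28, "Sat": 10, "Sun": 10}

def smooth(admissions, moves):
    """Apply a list of (from_day, to_day, count) moves to the weekly schedule."""
    out = dict(admissions)
    for src, dst, n in moves:
        out[src] -= n
        out[dst] += n
    return out

# Move 2 Wednesday cases to Saturday and 2 Tuesday cases to Sunday:
# 4 of 180 weekly admissions (~2.2%), in the same range as the study's 2.6%.
smoothed = smooth(scheduled, [("Wed", "Sat", 2), ("Tue", "Sun", 2)])
print(max(scheduled.values()), max(smoothed.values()))  # peak occupancy falls from 36 to 34
```

Even this small reallocation trims the midweek peak without changing how many patients are admitted each week, which is the intuition behind the study’s finding.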
Academic Opportunity
Academic hospitalists will find new opportunities to learn, network, and showcase their own insights at HM12, SHM’s annual meeting April 1-4 in San Diego.
This year, poster presenters will have even more time to present cutting-edge topics in hospital medicine. The popular Research, Innovation, and Clinical Vignettes (RIV) poster sessions will be split into two days.
The Research and Innovations poster reception will be held 5 to 7 p.m. April 2, while the Vignettes poster session will be held during lunch the next day. However, some things about the receptions won’t change: Sessions will be held in the exhibit hall.
The move to two poster receptions was in response to previous attendee feedback. As the numbers of attendees and poster presenters have grown, visiting all the posters and having meaningful conversations with the presenters became increasingly difficult. Now attendees—both academic and community-based hospitalists—can take their time and soak in more of the best thinking in the specialty.
If you’re thinking about submitting a poster for any of the three categories, now is the time to act: The submission deadline for abstracts is Dec. 2.
Poster sessions aren’t the only new chances for academic hospitalists to find valuable face time at HM12, either. This year’s program includes new opportunities to collaborate and connect with other academic hospitalists—and hospitalists from other backgrounds as well.
And the HM12 schedule will feature valuable courses specifically chosen for the unique needs and challenges of the academic hospitalist’s career.
Brendon Shank is SHM’s associate vice president of communications.
Should CMS Allow Access to Patient-Protected Medicare Data for Public Reporting?
PRO
Observational, database studies provide a powerful QI supplement
The rules proposed by the Centers for Medicare & Medicaid Services (CMS), which would allow access to patient-protected Medicare data, would provide greater transparency and data that could be utilized for comparative-effectiveness research (CER). Thus, these rules have the potential to improve the quality of healthcare and impact patient safety.
The Institute of Medicine in December 1999 issued its now-famous report “To Err Is Human,” which reported that medical errors cause up to 98,000 deaths and more than 1 million injuries each year in the U.S.6 However, the evidence shows minimal improvement in patient safety over the past 10 years.
A retrospective study of 10 North Carolina hospitals reported in the New England Journal of Medicine by Landrigan and colleagues found that harms resulting from medical care remained extremely common, with little evidence for improvement.7 It also is estimated that it takes 17 years on average for clinical research to become incorporated into the majority of clinical practices.8 Although the randomized controlled trial (RCT) is unquestionably the best research tool to explore simple components of clinical care (e.g., tests, drugs, and procedures), its translation into daily clinical practice remains difficult.
Improving the process of care leading to quality remains an extremely difficult proposition based on such sociological issues as resistance to change, the need for interdisciplinary teamwork, level of support staff, economic factors, information retrieval inadequacies, and, most important, the complexity of patients with multiple comorbidities that do not fit the parameters of the RCT.
Don Berwick, MD, a leading voice behind the IOM’s quality agenda and currently the CMS administrator, has stated that “in such complex terrain, the RCT is an impoverished way to learn.”9 Factors that cause this chasm include:10
- Too narrowly focused RCTs;
- Greater resource requirements for RCTs, including financial and personnel support, compared with usual clinical practice;
- Lack of collaboration between academic medical center researchers and community clinicians; and
- Lack of expertise and experience to undertake quality improvement in healthcare.
CER has received a $1.1 billion investment with the passage of the American Recovery and Reinvestment Act to provide evidence on the effectiveness, benefits, and harms of various treatment options.11 As part of this research to advance the IOM’s goals for healthcare, better evidence is desperately needed to cross the translational gap between clinical research and the bedside.12 Observational outcome studies based on registries or databases derived primarily from clinical care can provide a powerful supplement to quality improvement.13
Thus, the ability to combine Medicare claims with other data through the Availability of Medicare Data for Performance Measurement would supply a wealth of information to potentially impact quality. As a cautionary note, safeguards such as provider review and appeal, monitoring the validity of the information, and only using the data for quality improvement are vital.
Dr. Holder is medical director of hospitalist services and chief medical information officer at Decatur (Ill.) Memorial Hospital. He is a member of Team Hospitalist.
CON
Unanswered questions, risks make CMS plan a bad idea
On June 8, the Centers for Medicare & Medicaid Services (CMS) proposed a rule to allow “qualified entities” access to patient-protected Medicare data for provider performance publication. CMS allowed 60 days for public comment and a start date of Jan. 1, 2012. But this controversial rule appeared with short notice, little discussion, and abbreviated opportunity for comment.
CMS maintains this rule will result in higher quality and more cost-effective care. Considering the present volume of data published on multiple performance parameters for both hospitals and providers, it would seem prudent to have solid data for efficacy prior to implementing more required reporting and costs to the industry.1,2,3
Physicians and hospitals will have 30 days to review and verify three years of CMS claims data before it is released. The burden and cost of review will be borne by the private practices involved.1 This process will impose added administrative costs, and it is unlikely three years of data can be carefully reviewed in just 30 days. If practitioners find the review too cumbersome and expensive, which is likely, they will forgo review, putting the accuracy of the data in question.
Quality data already is published for both physicians and hospitals. Is there evidence this process will significantly increase transparency? Adding more layers of administrative work for both CMS and caregivers—higher overhead without defined benefit—seems an ill-conceived idea. From an evidence-based-practice standpoint, where is the evidence that this rule will improve “quality” and make care “cost-effective”? Have the risks (added bureaucracy, increased overhead, questionable data) and benefits (added transparency) been evaluated?
Additionally, it is unclear who will be monitoring the quality of the data published and who will provide oversight for the “entities” to ensure these data are fairly and accurately presented. Who will pay for this oversight, and what recourse will be afforded physicians and hospitals that feel they have been wronged?4,5
The “qualified entities” will pay CMS to cover their cost of providing data, raising concerns that this practice could evolve into patient-data “purchasing.” Although it is likely the selected entities will be industry leaders (at least initially) with the capability to protect data, is this not another opportunity for misuse or corruption in the system?
Other issues not clearly addressed include the nature of the patient-protected information and who will interpret this data in a clinical context. How will these data be adjusted for patient comorbidities and case mix, or will the data be published without regard to these important confounders?1,3
Publishing clinical data for quality assurance and feedback purposes is essential for quality care. Transparency has increased consumer confidence in the healthcare system and, indeed, has increased the healthcare system’s responsiveness to quality concerns. Granting the benefits of transparency, published data must be precise, accurate, and managed with good oversight in order to ensure the process does not target providers or skew results. Another program, especially one being fast-tracked and making once-protected patient information available to unspecified entities, raises many questions. Who will be watching these agencies for a clear interpretation? Is this yet another layer of CMS bureaucracy? In an era of evidence-based medicine, where is the evidence that this program will improve the system for the better?
Dr. Brezina is a hospitalist at Durham Regional Hospital in North Carolina.
References
- Under the magnifying glass (again): CMS proposes new access to Medicare data for public provider performance reports. Bass, Berry and Sims website. Available at: http://www.bassberry.com/communicationscenter/newsletters/. Accessed Aug. 31, 2011.
- Controversial rule to allow access to Medicare data. Modern Health website. Available at: http://www.modernHealthcare.com. Accessed Aug. 31, 2011.
- Physician report cards must give correct grades. American Medical News website. Available at: http://www.ama-assn.org/amednews/2011/09/05/edsa0905.htm. Accessed Sept. 12, 2011.
- OIG identifies huge lapses in hospital security, shifts its focus from CMS to OCR. Atlantic Information Services Inc. website. Available at: http://www.AISHealth.com. Accessed Sept. 12, 2011.
- Berry M. Insurers mishandle 1 in 5 claims, AMA finds. American Medical News website. Available at: http://www.ama-assn.org/amednews/2011/07/04/prl20704.htm. Accessed Sept. 12, 2011.
- Kohn LT, Corrigan JM, Donaldson MS, eds. To error is human: building a safer health system. Washington: National Academies Press; 1999.
- Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124-2134.
- Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington: National Academy Press; 2001:13.
- Berwick DM. The science of improvement. JAMA. 2008;299(10):1182-1184.
- Ting HH, Shojania KG, Montori VM, Bradley EH. Quality improvement science and action. Circulation. 2009;119:1962-1974.
- Committee on Comparative Research Prioritization. Institute of Medicine Initial National Priorities for Comparative Effectiveness Research. Washington: National Academy Press; 2009.
- Sullivan P, Goldman D. The promise of comparative effectiveness research. JAMA. 2011;305(4):400-401.
- Washington AE, Lipstein SH. The patient-centered outcomes research institute: promoting better information, decisions and health. Sept. 28, 2011; DOI: 10.10.1056/NEJMp1109407.
PRO
Observational, database studies provide a powerful QI supplement
The proposed rules from the Centers for Medicare & Medicaid Services (CMS), which would allow access to patient-protected Medicare data, will provide greater transparency and data that could be used for comparative-effectiveness research (CER). Thus, these rules have the potential to improve healthcare quality and patient safety.
In December 1999, the Institute of Medicine (IOM) issued its now-famous report "To Err Is Human," which estimated that medical errors cause up to 98,000 deaths and more than 1 million injuries each year in the U.S.6 In the decade since, however, the evidence shows minimal improvement in patient safety.
A retrospective study of 10 North Carolina hospitals, reported in the New England Journal of Medicine by Landrigan and colleagues, found that harms resulting from medical care remained extremely common, with little evidence of improvement.7 It also is estimated that clinical research takes 17 years, on average, to be incorporated into the majority of clinical practices.8 Although the randomized controlled trial (RCT) is unquestionably the best research tool for exploring discrete components of clinical care (e.g. tests, drugs, and procedures), translating its findings into daily clinical practice remains difficult.
Improving the processes of care that lead to quality remains extremely difficult because of such sociological issues as resistance to change, the need for interdisciplinary teamwork, levels of support staff, economic factors, inadequate information retrieval, and, most important, the complexity of patients with multiple comorbidities who do not fit the parameters of the RCT.
Don Berwick, MD, a lead author of the landmark IOM report and currently CMS administrator, has stated that "in such complex terrain, the RCT is an impoverished way to learn."9 Factors that contribute to this chasm include:10
- RCTs that are too narrowly focused;
- The greater resources, including financial and personnel support, that RCTs require compared with usual clinical practice;
- Lack of collaboration between academic medical center researchers and community clinicians; and
- Lack of expertise and experience in undertaking quality improvement in healthcare.
CER received a $1.1 billion investment with the passage of the American Recovery and Reinvestment Act to provide evidence on the effectiveness, benefits, and harms of various treatment options.11 To advance the IOM's goals for improving healthcare, better evidence is desperately needed to cross the translational gap between clinical research and the bedside.12 Observational outcome studies based on registries or databases derived primarily from clinical care can provide a powerful supplement to quality improvement.13
Thus, the ability to combine Medicare claims with other data through the Availability of Medicare Data for Performance Measurement rule would supply a wealth of information that could improve quality. As a cautionary note, safeguards are vital: provider review and appeal, monitoring of the information's validity, and restricting use of the data to quality improvement.
Dr. Holder is medical director of hospitalist services and chief medical information officer at Decatur (Ill.) Memorial Hospital. He is a member of Team Hospitalist.
CON
Unanswered questions, risks make CMS plan a bad idea
On June 8, the Centers for Medicare & Medicaid Services (CMS) proposed a rule that would allow "qualified entities" access to patient-protected Medicare data for publication of provider performance reports. CMS allowed 60 days for public comment and set a start date of Jan. 1, 2012. But this controversial rule appeared with short notice, little discussion, and abbreviated opportunity for comment.
CMS maintains this rule will result in higher-quality and more cost-effective care. Considering the volume of data already published on multiple performance parameters for both hospitals and providers, it would seem prudent to have solid evidence of efficacy before implementing more required reporting and imposing additional costs on the industry.1,2,3
Physicians and hospitals will have 30 days to review and verify three years of CMS claims data before it is released. The burden and cost of review will be borne by the private practices involved.1 This process will impose added administrative costs, and it is unlikely three years of data can be carefully reviewed in just 30 days. If practitioners find the review too cumbersome and expensive, which is likely, they will forgo review, putting the accuracy of the data in question.
Quality data already is published for both physicians and hospitals. Is there evidence this process will significantly increase transparency? Adding more layers of administrative work for both CMS and caregivers—higher overhead without defined benefit—seems an ill-conceived idea. From an evidence-based-practice standpoint, where is the evidence that this rule will improve “quality” and make care “cost-effective”? Have the risks (added bureaucracy, increased overhead, questionable data) and benefits (added transparency) been evaluated?
Additionally, it is unclear who will be monitoring the quality of the data published and who will provide oversight for the “entities” to ensure these data are fairly and accurately presented. Who will pay for this oversight, and what recourse will be afforded physicians and hospitals that feel they have been wronged?4,5
The “qualified entities” will pay CMS to cover their cost of providing data, raising concerns that this practice could evolve into patient-data “purchasing.” Although it is likely the selected entities will be industry leaders (or at least initially) with the capability to protect data, is this not another opportunity for misuse or corruption in the system?
Other issues not clearly addressed include the nature of the patient-protected information and who will interpret this data in a clinical context. How will these data be adjusted for patient comorbidities and case mix, or will the data be published without regard to these important confounders?1,3
Publishing clinical data for quality assurance and feedback is essential to quality care. Transparency has increased consumer confidence in the healthcare system and, indeed, has made the system more responsive to quality concerns. But even granting the benefits of transparency, published data must be precise, accurate, and managed with good oversight to ensure the process does not unfairly target providers or skew results. Another program, especially one being fast-tracked and making once-protected patient information available to unspecified entities, raises many questions. Who will watch these agencies to ensure the data are interpreted fairly? Is this yet another layer of CMS bureaucracy? In an era of evidence-based medicine, where is the evidence that this program will change the system for the better?
Dr. Brezina is a hospitalist at Durham Regional Hospital in North Carolina.
References
- Under the magnifying glass (again): CMS proposes new access to Medicare data for public provider performance reports. Bass, Berry and Sims website. Available at: http://www.bassberry.com/communicationscenter/newsletters/. Accessed Aug. 31, 2011.
- Controversial rule to allow access to Medicare data. Modern Healthcare website. Available at: http://www.modernHealthcare.com. Accessed Aug. 31, 2011.
- Physician report cards must give correct grades. American Medical News website. Available at: http://www.ama-assn.org/amednews/2011/09/05/edsa0905.htm. Accessed Sept. 12, 2011.
- OIG identifies huge lapses in hospital security, shifts its focus from CMS to OCR. Atlantic Information Services Inc. website. Available at: http://www.AISHealth.com. Accessed Sept. 12, 2011.
- Berry M. Insurers mishandle 1 in 5 claims, AMA finds. American Medical News website. Available at: http://www.ama-assn.org/amednews/2011/07/04/prl20704.htm. Accessed Sept. 12, 2011.
- Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington: National Academies Press; 1999.
- Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124-2134.
- Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington: National Academy Press; 2001:13.
- Berwick DM. The science of improvement. JAMA. 2008;299(10):1182-1184.
- Ting HH, Shojania KG, Montori VM, Bradley EH. Quality improvement science and action. Circulation. 2009;119:1962-1974.
- Committee on Comparative Research Prioritization. Institute of Medicine Initial National Priorities for Comparative Effectiveness Research. Washington: National Academy Press; 2009.
- Sullivan P, Goldman D. The promise of comparative effectiveness research. JAMA. 2011;305(4):400-401.
- Washington AE, Lipstein SH. The patient-centered outcomes research institute: promoting better information, decisions and health. N Engl J Med. Sept. 28, 2011; DOI: 10.1056/NEJMp1109407.
Dr. Hospitalist
I recently became chief medical officer (CMO) of our hospital. When a hospitalist’s case comes to our patient-care committee, is it appropriate to inform the patient’s primary-care physician (PCP) of the quality issues? Our hospitalists are independent. There are questions of HIPAA. However, several committee members feel that the PCP, who does not come to the hospital, should be informed. Thank you.
K.A., M.D.
Dr. Hospitalist responds:
Good question. While I’ve participated in similar scenarios, keep in mind that I’m a hospitalist like you, not a lawyer. So, with that caveat in mind, let’s take this discussion a few steps further and see what happens.
You state: “when a hospitalist’s case comes to our patient-care committee.” Does that mean peer review? If it does, and what you are describing is a committee that handles privileged and confidential information, then you cannot inform the PCP because you would be violating the basic tenets of peer review.
The principle behind peer review is that it allows physicians to confidentially review the cases of their peers. This is to prevent the information contained in peer review from becoming available to a lawyer by subpoena or by discovery in the courts. The Joint Commission has mandated hospital peer review committees since 1952, and the federal government included language regarding peer-review protection in the Health Care Quality Improvement Act of 1986.
Every state has a law on the books, but the specifics and effectiveness of peer review will vary from state to state (see Florida’s Amendment 7, Kentucky, and Massachusetts). The whole idea is to allow for a process to evaluate physician practice or quality concerns without the fear of discovery or subsequent lawsuit. Even the act of referring a case to peer review is considered a confidential action in my state, so just the referral itself may not be discussed. So if you are referring to peer review, the answer is no, you cannot inform the patient’s PCP. HIPAA does not come into the picture here.
On the other hand, let’s assume, for sake of discussion, that you’ve heard a complaint (or several) about a certain hospitalist, Dr. Nogood. You could, if you desired, refer these complaints to peer review.
If so, then you are immediately bound by those rules of confidentiality. If you don’t refer the case, then you could inform the PCP that you have heard a complaint involving Dr. Nogood and that PCP’s patient.
I can’t see how that would violate HIPAA, because the PCP has an established relationship with that patient, and you would only be reporting facts (the complaint), not passing judgment on the quality of care. Even so, I would not go that far.
Why stop there? Why not tell that PCP exactly what you think of Dr. Nogood and his clinical practice, the details of the complaints against him, and how you think maybe that PCP should send his patients to someone else for better care? Well, you’re the CMO for the hospital. If you go beyond reporting facts and start reporting opinions, then you’ve just opened yourself up to accusations of restraint of trade by Dr. Nogood.
No matter what you may think of Dr. Nogood’s patient care, unless it falls outside the boundaries of acceptable practice (which can only be determined by a peer review committee), then you should not say anything.
Unless, of course, you want to be accused of spreading rumors, hearsay, and innuendo. Remember, we are talking about an independent practitioner, not a hospital employee.
Overall, it’s a bit of a sticky wicket. If you think the complaint has merit, then it should be sent to peer review—and you may speak no more of it. If you think the complaint is baseless, then why give it life by repeating it to the PCP?
Peer review is an exceptional process, and the physicians who serve on such committees perform a difficult and selfless service. We should all do our best to uphold its integrity.
I recently became chief medical officer (CMO) of our hospital. When a hospitalist’s case comes to our patient-care committee, is it appropriate to inform the patient’s primary-care physician (PCP) of the quality issues? Our hospitalists are independent. There are questions of HIPAA. However, several committee members feel that the PCP, who does not come to the hospital, should be informed. Thank you.
K.A., M.D.
Dr. Hospitalist responds:
Good question. While I’ve participated in similar scenarios, keep in mind that I’m a hospitalist like you, not a lawyer. So, with that rejoinder in mind, let’s take this discussion a few steps further and see what happens.
You state: “when a hospitalist’s case comes to our patient-care committee.” Does that mean peer review? If it does, and what you are describing is a committee that handles privileged and confidential information, then you cannot inform the PCP because you would be violating the basic tenets of peer review.
The principle behind peer review is that it allows physicians to confidentially review the cases of their peers. This is to prevent the information contained in peer review from becoming available to a lawyer by subpoena or by discovery in the courts. The Joint Commission has mandated hospital peer review committees since 1952, and the federal government included language regarding peer-review protection in the Health Care Quality Improvement Act of 1986.
Every state has a law on the books, but the specifics and effectiveness of peer review will vary from state to state (see Florida’s Amendment 7, Kentucky, and Massachusetts). The whole idea is to allow for a process to evaluate physician practice or quality concerns without the fear of discovery or subsequent lawsuit. Even the act of referring a case to peer review is considered a confidential action in my state, so just the referral itself may not be discussed. So if you are referring to peer review, the answer is no, you cannot inform the patient’s PCP. HIPAA does not come into the picture here.
On the other hand, let’s assume, for sake of discussion, that you’ve heard a complaint (or several) about a certain hospitalist, Dr. Nogood. You could, if you desired, refer these complaints to peer review.
If so, then you are immediately bound by those rules of confidentiality. If you don’t refer the case, then you could inform the PCP that you have heard a complaint involving Dr. Nogood and that PCP’s patient.
I can’t see how that would violate HIPAA, because the PCP has an established relationship with that patient, and you might be only reporting facts (the complaint), not passing judgment on the quality of care. And I would not even go that far.
Why stop there? Why not tell that PCP exactly what you think of Dr. Nogood and his clinical practice, the details of the complaints against him, and how you think maybe that PCP should send his patients to someone else for better care? Well, you’re the CMO for the hospital. If you go beyond reporting facts and start reporting opinions, then you’ve just opened yourself up to accusations of restraint of trade by Dr. Nogood.
No matter what you may think of Dr. Nogood’s patient care, unless it falls outside the boundaries of acceptable practice (which can only be determined by a peer review committee), then you should not say anything.
Unless, of course, you want to be accused of spreading rumors, hearsay, and innuendo. Remember, we are talking about an independent practitioner, not a hospital employee.
Overall, it’s a bit of a sticky wicket. If you think the complaint has merit, then it should be sent to peer review—and you may speak no more of it. If you think the complaint is baseless, then why sustain it and tell the PCP?
Peer review is an exceptional process, and the physicians who serve on such committees perform a difficult and selfless service. We should all do our best to uphold its integrity.
I recently became chief medical officer (CMO) of our hospital. When a hospitalist’s case comes to our patient-care committee, is it appropriate to inform the patient’s primary-care physician (PCP) of the quality issues? Our hospitalists are independent. There are questions of HIPAA. However, several committee members feel that the PCP, who does not come to the hospital, should be informed. Thank you.
K.A., M.D.
Dr. Hospitalist responds:
Good question. While I’ve participated in similar scenarios, keep in mind that I’m a hospitalist like you, not a lawyer. So, with that rejoinder in mind, let’s take this discussion a few steps further and see what happens.
You state: “when a hospitalist’s case comes to our patient-care committee.” Does that mean peer review? If it does, and what you are describing is a committee that handles privileged and confidential information, then you cannot inform the PCP because you would be violating the basic tenets of peer review.
The principle behind peer review is that it allows physicians to confidentially review the cases of their peers. This is to prevent the information contained in peer review from becoming available to a lawyer by subpoena or by discovery in the courts. The Joint Commission has mandated hospital peer review committees since 1952, and the federal government included language regarding peer-review protection in the Health Care Quality Improvement Act of 1986.
Every state has a law on the books, but the specifics and effectiveness of peer review will vary from state to state (see Florida’s Amendment 7, Kentucky, and Massachusetts). The whole idea is to allow for a process to evaluate physician practice or quality concerns without the fear of discovery or subsequent lawsuit. Even the act of referring a case to peer review is considered a confidential action in my state, so just the referral itself may not be discussed. So if you are referring to peer review, the answer is no, you cannot inform the patient’s PCP. HIPAA does not come into the picture here.
On the other hand, let’s assume, for sake of discussion, that you’ve heard a complaint (or several) about a certain hospitalist, Dr. Nogood. You could, if you desired, refer these complaints to peer review.
If so, then you are immediately bound by those rules of confidentiality. If you don’t refer the case, then you could inform the PCP that you have heard a complaint involving Dr. Nogood and that PCP’s patient.
I can’t see how that would violate HIPAA, because the PCP has an established relationship with that patient, and you might be only reporting facts (the complaint), not passing judgment on the quality of care. And I would not even go that far.
Why stop there? Why not tell that PCP exactly what you think of Dr. Nogood and his clinical practice, the details of the complaints against him, and how you think maybe that PCP should send his patients to someone else for better care? Well, you’re the CMO for the hospital. If you go beyond reporting facts and start reporting opinions, then you’ve just opened yourself up to accusations of restraint of trade by Dr. Nogood.
No matter what you may think of Dr. Nogood’s patient care, unless it falls outside the boundaries of acceptable practice (which can only be determined by a peer review committee), then you should not say anything.
Unless, of course, you want to be accused of spreading rumors, hearsay, and innuendo. Remember, we are talking about an independent practitioner, not a hospital employee.
Overall, it’s a bit of a sticky wicket. If you think the complaint has merit, then it should be sent to peer review—and you may speak no more of it. If you think the complaint is baseless, then why repeat it to the PCP at all?
Peer review is an exceptional process, and the physicians who serve on such committees perform a difficult and selfless service. We should all do our best to uphold its integrity.
By the Numbers: $4,000
According to a new study in American Economic Journal: Applied Economics by MIT economist Joseph Doyle, a $4,000 increase in per-patient hospital expenditures equates to a 1.4% decrease in mortality rates. Doyle studied 37,000 hospitalized patients in Florida who entered through the ED from 1996 to 2003. However, he focused on those visiting from other states in order to identify variation resulting from the level of care itself, not the prior health of the patients. The greater expense—and benefits—of care in the higher-cost hospital appeared to come from the broader application of ICU tools and greater complement of medical personnel, he notes.
“There are smart ways to spend money and ineffective ways to spend money,” he says, “and we’re still trying to figure out which are which, as much as possible.”