The role of heroes
Back in January, I remember four different local television stations covering the memorial service for Stan Musial, a baseball player with the St. Louis Cardinals from 1941 to 1963. A few days earlier, his body had lain in state for public visitation in the great Cathedral Basilica of St. Louis, which is a block north of my home. The police were rerouting traffic to accommodate the crowd. His prowess as a hitter was unquestioned, with first-ballot election to the Hall of Fame. He was known for his sportsmanship, having never been ejected from a professional baseball game. He was known for his modesty and his service to the community. On Feb. 15, 2011, Stan Musial was awarded the Presidential Medal of Freedom, the highest honor given to a civilian, by President Barack Obama, who called him "an icon untarnished, a beloved pillar of the community, a gentleman you’d want your kids to emulate."
I’m really not that big a baseball fan. But this pomp and circumstance for Stan stood in sharp contrast to the treatment of another athlete in the news not long ago – Lance Armstrong’s admission to Oprah Winfrey of a career of cheating. It came just 2 weeks after the Baseball Writers’ Association of America failed to elect any players to the Baseball Hall of Fame in 2013, the first time that had happened in 17 years. The era of steroids was assigned the blame. What, in the age of smartphones, is the role of the sports hero?
Part of being a pediatrician over the past two generations has been a continual expansion of the concept of child health. We’ve moved from seeking cures for disease to preventing disease with vaccines, then to promoting physical fitness, combating child abuse, and preventing injuries with anticipatory guidance about car seats, poisons, safer cribs, and safer toys. Fluoride toothpaste has improved dental health. We’ve begun to promote better mental health care for children. In the wake of many tragic events, we are just beginning to address problems such as bullying and social isolation.
When and how will this expansion of the idea of child health begin to address moral development? How does society promote character development, virtuous behavior, and compassion? How do pediatricians detect developmental delays in those areas? Moral development has been studied, although I suspect the works of Lawrence Kohlberg and Carol Gilligan would not be recognized by most pediatric residents. Routine newborn care and well-child visits screen for inborn errors of metabolism, hearing loss, iron deficiency, autism, vision problems, and delays in gross and fine motor development. Attention-deficit/hyperactivity disorder has been added to the list. Obsessive-compulsive disorder, depression, and anxiety don’t have very good screening tools. Screening for personality disorders and character development is nonexistent.
While reflecting on the role of heroes in the moral development of children, one also has to consider the role of heroes in the aspirations of adults. Most physicians will never invent a vaccine for polio like Jonas Salk did. They will not become medical missionaries like Albert Schweitzer. To what do today’s pediatricians aspire? Who are our heroes? Part of modern medicine has been devoted to expensive rescue medicine. Some people attach great importance to identifying a particular child whose life they can "save." Organ transplantation is a prime example. But if you’re not interested in assigning who gets the credit, public health programs offer the ability to save many more lives. The Back to Sleep program, launched in 1994, has reduced the rate of sudden infant death syndrome (SIDS) by 50%. More than 2,000 babies a year in the United States survive because of that program, although we cannot identify which ones they are. That is as many children as are saved by heart and lung transplants combined – all because someone figured out that we should put babies to sleep on their backs.
As I finish this column, I reflect on movies I’ve seen this month. "Olympus Has Fallen" is a classic action movie with the hero killing all the bad guys. As a boy, I was raised on such movies with such heroes. But then there was the movie "42" about Jackie Robinson, who broke the color barrier in baseball by not fighting back. And that was heroic also.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. He said he had no financial conflicts of interest.
Coordination of care
The teenager was transferred from the local inpatient psychiatric facility to the emergency department of Children’s Hospital. There had been repeated episodes, witnessed by the facility’s nursing staff, during which he became unresponsive, even to sternal rub. There was no cyanosis during the episodes, and no incontinence. There had never been any injuries from falls associated with the onset of these events.
Careful history in the ED indicates these events have been going on for months. Initially they occurred only at school, but they have increased in frequency and duration. Neurology examines him and concludes these aren’t seizures. The neurologists do not recommend any testing and don’t want him on their service, so the ED admits him to the general pediatric service overnight. Cardiac exam is normal. EKG is normal. Overnight telemetry shows no arrhythmias or vital sign changes. Neurologic exam is normal. The clinical medicine team concurs with the diagnosis of pseudoseizures, although malingering and conversion reaction remain in the differential. The inpatient psychiatric service would be the best place to investigate and treat this. The resident relays the findings to staff at the psychiatric hospital, and an ambulance takes him back.
However, while this is occurring, there is a changeover in the attending of record at the psychiatric facility. The new physician is not satisfied with the mere clinical opinions of the neurologists. She demands testing to prove these episodes are not seizures. The ambulance brings him back to the Children’s Hospital, where he is readmitted. Neurology, under duress, performs an EEG the following day, which is normal. However, no unresponsive episodes occur during the EEG. On discussion with the other facility, the stakes are raised again. Should a 24-hour, or even 72-hour, video EEG be done? What about a Holter monitor?
Pseudoseizures are not the only diagnosis causing these conflicts. The psychiatric facility frequently sends children with intractable abdominal pain in for evaluation. I haven’t found a case of appendicitis yet among that group. I’ve certainly admitted many children with chronic abdominal pain who I believe would be better treated by a psychiatrist.
The interactions and transitions of care between subspecialty services are fraught with anomalies. Business people refer to this as a silo mentality, which is at best inefficient and at worst a cause of failure. When I did locum tenens in one small town, the ED nurses were surprised that I, a general pediatrician, would discharge a child with abdominal pain without a consult and examination by a surgeon. Two months later, at another facility, a general surgeon made it clear that until there was a CT scan with a radiology reading confirming appendicitis, he wasn’t coming in from home. There is wide variation in customary care.
As a hospitalist, part of my time is dedicated to improving the system. Reviewing sentinel events such as readmissions is part of the quality improvement process. A round-trip ambulance ride is clearly not good care. Fixing the problem, however, requires finding the right tool.
One option is the morbidity and mortality (M&M) conference. While surgeons find that approach useful, I observe it to be frequently dysfunctional. Somebody sees something that didn’t go right. They propose a new policy that will prevent that problem from recurring. Nobody does research to prove that the solution really works. They never consider, much less measure, whether the new practice causes more problems than it solves. Then, years later, no one can quite recall why we began doing things this way, which makes dysfunctional procedures very refractory to correction.
Another tool is a peer review committee that investigates cases. On that committee, I strongly advocate that before any decision is made, a request-for-information letter be sent to the staff involved to hear their side of the story, which frequently isn’t captured in the documentation.
In any human endeavor, differences of opinion will occur. Professionalism is the art of resolving these differences while keeping the focus on what is best for the patient. If the physician at the psychiatric facility truly thought that these events were seizures and that neurology had not adequately investigated them, then I, as a hospitalist, needed to find a way of addressing that concern. There is a limit, however, to allowing a physician of one specialty to insist upon a test being done by a different subspecialist.
If you were expecting this column to end with a nice, tidy description of the solution I found to keep this particular situation from happening again, I’m sorry to disappoint you. I’m an engineer and a pediatrician, not a miracle worker. As a hospitalist, I patch together a quilt of different subspecialty services to form a blanket that will cover all the patient’s needs. Sometimes, quick-on-my-feet problem-solving is more effective than policies and procedures. I do advocate for a policy that, as long as the patient is on my service, consultants are advisers and ultimate decision-making responsibility rests with me. Still, I wish everyone would play together nicely in the sandbox.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. He said he had no financial conflicts of interest.
$750 Billion in Waste
"You actually try and avoid ordering a test?" There was an element of surprise, even some incredulity, in the subspecialist’s voice. "Yes," I said.
The case involved a 2-year-old with ... Well, those details don’t matter. The point is that all tests have a downside. There is the discomfort of the test, particularly in pediatrics. This week I had a baby who required four IV sticks and a radial artery poke to get enough blood to work up a potential (though very unlikely) inborn error of metabolism. There are potential complications, especially the risk of cancer with any x-ray or CT scan. There is the potential to be misled. In my career I’ve seen several patients misdiagnosed based on a test with poor specificity. But the biggest issue is the waste of resources in a society where people are dying for lack of access to affordable care.
I recall a TV series in which, in each episode, a patient arrives at the hospital emergency department and a brilliant young intern rattles off a litany of tests before examining the patient. It makes a good action script but bad medical care. As an attending at a teaching hospital, I frequently review the tests proposed by residents. Part of that review is asking whether a test is really necessary. What are you going to do differently if the repeat CBC shows a white count of 20,000 rather than 10,000? What do practice guidelines say about the utility of a blood culture for a febrile but nontoxic 2-year-old with community-acquired pneumonia? In your judgment, how helpful will another CT scan be for this teenager with chronic abdominal pain who has had three scans in the past 12 months? Given recently published data on the risks of radiation, is a head CT of a child who bumped his head really indicated "just to be on the safe side," or is it harmful defensive medicine?
With an ever-growing bag of tests, modern medicine is as much about what not to order as it is about what to order. The same is true for therapies. Just last week I managed to keep a hospitalized child from getting an IV. I object strongly whenever a resident implies that we need to start an IV solely as a means to justify a hospitalization. Given that the child had already failed a course of outpatient therapy, a different oral antibiotic was used, and the entire hospitalization was appropriately covered by insurance as an observation stay.
Various paradigms have been proposed to rein in the out-of-control increase in the cost of health care. This is particularly true in the United States, where costs continue to rise while life expectancy is below that of many other developed countries that spend half as much. Initially labeled rationing, cost-control efforts have morphed into many forms. There have been attempts to align financial incentives through diagnosis-related groups. Gatekeeping by primary care physicians was attempted. Second opinions before surgery became a requirement for a few years. In the 1990s, my office was constantly dealing with a high school–trained clerk at the insurance company with a book that said whether a particular test or procedure would be covered.
Lately the push has been less about attempting to limit care and more about simply curtailing waste. Last spring, the Choosing Wisely campaign announced "nine physician organizations that each identified five tests or procedures in their respective fields that may be overused or unnecessary."
It is a variation of the "Think globally, but act locally" paradigm.
Another effort this summer has been a campaign emphasizing "Avoiding avoidable waste." Both campaigns have more appropriate mottos than my sentiment of "Don’t act stupid."
Poor-quality research and defensive medicine have influenced many physicians to overtest and overtreat. Comparative effectiveness research has been inadequate. Yet according to a recent Medscape poll, a plurality of physicians have "no intention of reducing the amount of tests, procedures, and treatments they perform because they believe the quality guidelines and cost-containment measures aren’t in patients’ best interest." Obviously, policy makers, researchers, and clinical ethicists have a long way to go on this problem of professional stewardship.
As I delinquently turn this column in to my editor, a new report has just come out. The Institute of Medicine has released a report asserting the United States wasted $750 billion on health care in 2009.
As we approach an election heavily influenced by health care reform and the fiscal cliff, I suspect this report on waste may be as seminal an event for the next decade as the IOM report on medical error was in 1999.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. He said he had no relevant financial disclosures. E-mail Dr. Powell at [email protected].
"You actually try and avoid ordering a test?" There was an element of surprise, even some incredulity, in the subspecialist’s voice. "Yes," I said.
The case involved a 2-year-old with ... Well, those details don’t matter. The point is that all tests have a downside. There is the discomfort of the test, particularly in pediatrics. This week I had the baby who required four IV sticks and a radial artery poke to get enough blood to work up a potential (though very unlikely) inborn error of metabolism. There are potential complications, especially the risk of cancer with any x-ray or CT scan. There is the potential to be misled. In my career I’ve seen several patients misdiagnosed based on a test with poor specificity. But the biggest issue is the waste of resources in a society where people are dying due to lack of access to affordable care.
I recall a TV series where in each episode a patient arrives at the hospital emergency department and a brilliant young intern rattles off a litany of tests before examining the patient. It makes a good action script but bad medical care. As an attending at a teaching hospital, I’m frequently reviewing the tests proposed by residents. Part of that review is asking whether a test is really necessary. What are you going to do differently if the repeat CBC had a white count of 20,000 versus 10,000? What do practice guidelines say about the utility of a blood culture for a febrile but nontoxic 2-year-old with community-acquired pneumonia? In your judgment, how helpful will another CT scan be for this teenager with chronic abdominal pain who has had three scans in the past 12 months? Given recently published data on the risks of radiation, is a head CT of a child who bumped his head really indicated "just to be on the safe side" or is it harmful defensive medicine?
With an ever-growing bag of tests, modern medicine is as much about what not to order as it is about what to order. The same is true for therapies. Just last week I managed to keep a hospitalized child from getting an IV. I object strongly whenever a resident implies that we need to start an IV solely as a means to justify a hospitalization. Given that the child had already failed a course of outpatient therapy, a different oral antibiotic was used, and the entire hospitalization was appropriately covered by insurance as an observation stay.
Various paradigms have been proposed to rein in the out-of-control increase in the cost of health care. This is particularly true in the United States, where costs continue to rise while life expectancy is below that of many other developed countries that spend half as much. Initially labeled rationing, cost control efforts have morphed into many forms. There have been attempts to align financial incentives through diagnosis-related groups. Gatekeeping by primary care physicians was attempted. Second opinions before surgery became a requirement for a few years. In the 1990s, my office was constantly dealing with a high school–trained clerk at the insurance company with a book that said whether or not a particular test or procedure would be covered.
Lately the push has been less about attempting to limit care and more about simply curtailing waste. Last spring, the Choosing Wisely campaign announced "nine physician organizations that each identified five tests or procedures in their respective fields that may be overused or unnecessary."
It is a variation of the "Think globally, but act locally" paradigm.
Another effort this summer has been a campaign emphasizing "Avoiding avoidable waste." Both campaigns have more appropriate mottos than my sentiment of "Don’t act stupid."
Poor-quality research and defensive medicine have influenced many physicians to overtest and overtreat. Comparative effectiveness research has been inadequate. Yet according to a recent Medscape poll, the plurality of physicians have "no intention of reducing the amount of tests, procedures, and treatments they perform because they believe the quality guidelines and cost-containment measures aren’t in patients’ best interest." Obviously, policy makers, researchers, and clinical ethicists have a long way to go on this problem of professional stewardship.
As I delinquently turn this column in to my editor, a new report has just come out. The Institute of Medicine has released a report asserting the United States wasted $750 billion on health care in 2009.
As we approach an election heavily influenced by health care reform and the fiscal cliff, I suspect this report on waste may be as seminal an event for the next decade as the IOM report on medical error was in 1999.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. He said he had no relevant financial disclosures. E-mail Dr. Powell at [email protected].
"You actually try and avoid ordering a test?" There was an element of surprise, even some incredulity, in the subspecialist’s voice. "Yes," I said.
The case involved a 2-year-old with ... Well, those details don’t matter. The point is that all tests have a downside. There is the discomfort of the test, particularly in pediatrics. This week I had the baby who required four IV sticks and a radial artery poke to get enough blood to work up a potential (though very unlikely) inborn error of metabolism. There are potential complications, especially the risk of cancer with any x-ray or CT scan. There is the potential to be misled. In my career I’ve seen several patients misdiagnosed based on a test with poor specificity. But the biggest issue is the waste of resources in a society where people are dying due to lack of access to affordable care.
I recall a TV series where in each episode a patient arrives at the hospital emergency department and a brilliant young intern rattles off a litany of tests before examining the patient. It makes a good action script but bad medical care. As an attending at a teaching hospital, I’m frequently reviewing the tests proposed by residents. Part of that review is asking whether a test is really necessary. What are you going to do differently if the repeat CBC had a white count of 20,000 versus 10,000? What do practice guidelines say about the utility of a blood culture for a febrile but nontoxic 2-year-old with community-acquired pneumonia? In your judgment, how helpful will another CT scan be for this teenager with chronic abdominal pain who has had three scans in the past 12 months? Given recently published data on the risks of radiation, is a head CT of a child who bumped his head really indicated "just to be on the safe side" or is it harmful defensive medicine?
With an ever-growing bag of tests, modern medicine is as much about what not to order as it is about what to order. The same is true for therapies. Just last week I managed to keep a hospitalized child from getting an IV. I object strongly whenever a resident implies that we need to start an IV solely as a means to justify a hospitalization. Given that the child had already failed a course of outpatient therapy, a different oral antibiotic was used, and the entire hospitalization was appropriately covered by insurance as an observation stay.
Various paradigms have been proposed to rein in the out-of-control increase in the cost of health care. This is particularly true in the United States, where costs continue to rise while life expectancy is below that of many other developed countries that spend half as much. Initially labeled rationing, cost control efforts have morphed into many forms. There have been attempts to align financial incentives through diagnosis-related groups. Gatekeeping by primary care physicians was attempted. Second opinions before surgery became a requirement for a few years. In the 1990s, my office was constantly dealing with a high school–trained clerk at the insurance company with a book that said whether or not a particular test or procedure would be covered.
Lately the push has been less about attempting to limit care and more about simply curtailing waste. Last spring, the Choosing Wisely campaign announced "nine physician organizations that each identified five tests or procedures in their respective fields that may be overused or unnecessary."
It is a variation of the "Think globally, but act locally" paradigm.
Another effort this summer has been a campaign emphasizing "Avoiding avoidable waste." Both campaigns have more appropriate mottos than my sentiment of "Don’t act stupid."
Poor-quality research and defensive medicine have influenced many physicians to overtest and overtreat. Comparative effectiveness research has been inadequate. Yet according to a recent Medscape poll, the plurality of physicians have "no intention of reducing the amount of tests, procedures, and treatments they perform because they believe the quality guidelines and cost-containment measures aren’t in patients’ best interest." Obviously, policy makers, researchers, and clinical ethicists have a long way to go on this problem of professional stewardship.
As I delinquently turn this column in to my editor, a new report has just come out. The Institute of Medicine has released a report asserting the United States wasted $750 billion on health care in 2009.
As we approach an election heavily influenced by health care reform and the fiscal cliff, I suspect this report on waste may be as seminal an event for the next decade as the IOM report on medical error was in 1999.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. He said he had no relevant financial disclosures. E-mail Dr. Powell at [email protected].
Not Doing Everything
The patient was an 11-year-old female who had been generally healthy until the past 3 months. Polyuria and polydipsia suggested diabetes insipidus. A splenic mass had been removed, but now there was a symptomatic mass in the left femur. The recommended aggressive course of therapy included amputation of the leg. But was aggressive treatment in the patient’s best interest, given what appeared to be metastatic cancer? As chair of an ethics committee and ethics consult service, I have facilitated many similar conversations about end-of-life care.
This time was a little different. The conversation was with my brother. The patient was his elderly German Shepherd dog. Since we lived 1,000 miles apart, it occurred over the phone. Besides those features, the conversation followed a typical trajectory over two evenings. The day after our last conversation, my brother made his decision. The vet made a compassionate house call to end the pet’s suffering.
There have been remarkable advances in veterinary medicine.* A recent New York Times article raised the ethical question about how much money and effort is appropriate for a dying pet.
I do not share my brother’s predilection to form emotional bonds with animals, but I know many people who do. My observations reveal no fundamental difference between how pet owners make loving decisions for their pets and how most people make wise decisions about end-of-life care for themselves and their relatives. At least until someone interjects that vague, loaded, and misleading question, "Do you want us to do everything?"
At what point does aggressive treatment merely prolong the suffering?
The U.S. health care system does a very poor job addressing that question. In many ways, that question created modern clinical ethics. Perhaps the attitudes of pet caretakers can give us insight that often gets eclipsed by political rhetoric about fictional death panels.
Western society’s attitudes toward animals have been evolving. As a child, I learned that humans were distinguished from "lower" animals because only humans used language, used tools, had emotions, or appeared distressed by ethical dilemmas. One by one, scientific research has undermined each of these distinctions. Genetics has shown a 98% homology between human DNA and that of other primates. Rather than merely trying to prevent animal cruelty, secular organizations such as People for the Ethical Treatment of Animals (PETA) seek to establish ethical norms for interacting with animals. This includes discouraging the use of fur, changing the architecture of zoos, and opposing practices in modern farming that crowd animals in cages. Religion weighs in on this subject as well. The relationship between humans and other animals was the subject of the spring 2012 religion and science seminar at the Lutheran School of Theology at Chicago, and of the Goshen College 2012 Conference on Science and Religion. Of course, Hinduism predates all this by millennia.
Life-sustaining technologies blossomed in the 1970s, partly due to advancements in medical science, and partly due to profitability from Medicare reimbursement, which began in 1965. Theologians, historically the experts on the meaning of life, began to debate whether technology was always a good thing. A Catholic voice from this era was the Rev. Kevin O’Rourke, a theologian at Saint Louis University who later moved to Loyola in Chicago. He passed away on March 28.
For decades he staunchly defended the difference between ordinary and extraordinary actions at the end of life, arguing that a patient may use, but is not obligated to employ, extraordinary measures to prolong life. Secular philosophers weighed in with similar analyses, which identified some treatments as burdensome. They defended, on the grounds of patient autonomy, the right of a patient to refuse to consent to overly burdensome medical treatment. The U.S. legal system concurred in a series of landmark cases. A generation later, clinical ethicists continue to debate this issue and to refine the means by which those decisions are made. In 2006, board certification became available in the subspecialty of hospice and palliative medicine.
Alas, many doctors are still reluctant to initiate discussions of end-of-life care with their patients. There are multiple reasons for this. Reimbursement is one barrier. Attempts to improve reimbursement for this activity in 2009 were abandoned when Sarah Palin mischaracterized the process as the establishment of government death panels.
Clinical ethicists would characterize these discussions as the epitome of patient advocacy. Physicians need to carefully discuss goals of treatment rather than cop out and ask that horrifically inaccurate question, "Do you want us to do everything?" Can the acrimonious debate be enlightened by the choices of a pet lover?
I assert that it can. My brother now cares for two more German Shepherds adopted from rescue shelters. He is also my health care power of attorney. I anticipate that his experience with his beloved pets will provide him with the wisdom and compassion to make choices for me should circumstances so require.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. E-mail Dr. Powell at [email protected].
* Author's changes made 5/10/2012.
The patient was an 11-year-old female who had been generally healthy until the past 3 months. Polyuria and polydipsia suggested diabetes insipidus. A splenic mass had been removed, but now there was a symptomatic mass in the left femur. The recommended aggressive course of therapy included amputation of the leg. But was aggressive treatment in the patient’s best interest, given what appeared to be metastatic cancer? As chair of an ethics committee and ethics consult service, I have facilitated many similar conversations about end-of-life care.
This time was a little different. The conversation was with my brother. The patient was his elderly German Shepherd dog. Since we lived 1,000 miles apart, it occurred over the phone. Besides those features, the conversation followed a typical trajectory over two evenings. The day after our last conversation, my brother made his decision. The vet made a compassionate house call to end the pet’s suffering.
There have been remarkable advances in veterinary medicine.* A recent New York Times article raised the ethical question about how much money and effort is appropriate for a dying pet.
I do not share my brother’s predilection to form emotional bonds with animals, but I know many people who do. My observations reveal no fundamental difference between how pet owners make loving decisions for their pets and how most people make wise decisions about end-of-life care for themselves and their relatives. At least until someone interjects that vague, loaded, and misleading question, "Do you want us to do everything?"
At what point does aggressive treatment merely prolong the suffering?
The U.S. health care system does a very poor job addressing that question. In many ways, that question created modern clinical ethics. Perhaps the attitdudes of pet caretakers can give us insight that often gets eclipsed by political rhetoric about fictional death panels.
Western society’s attitudes toward animals have been evolving. As a child, I learned that humans were distinguished from "lower" animals because only humans used language, used tools, had emotions, or appeared distressed by ethical dilemmas. One by one, scientific research has disproved each of these characteristics. Genetics has shown a 98% homology of human DNA with that of other primates. Rather than merely trying to prevent animal cruelty, secular organizations such as People for the Ethical Treatment of Animals (PETA) seek to establish ethical norms for interacting with animals. This includes discouraging the use of furs, changing the architecture of zoos, and opposing practices in modern farming that crowd animals in cages. Religion weighs in on this subject as well. The relationship between humans and other animals has been the subject of the spring 2012 religion and science seminar at the Lutheran School of Theology at Chicago, and the subject of the Goshen College 2012 Conference on Science and Religion. Of course, Hinduism predates all this by millennia.
Life-sustaining technologies blossomed in the 1970s, partly due to advancements in medical science, and partly due to profitability from Medicare reimbursement, which began in 1965. Theologians, historically the experts on the meaning of life, began to debate whether technology was always a good thing. A Catholic voice from this era was the Rev. Kevin O’Rourke, a theologian at Saint Louis University who later moved to Loyola in Chicago. He passed away on March 28.
For decades he staunchly defended the difference between ordinary and extraordinary actions at the end of life, arguing that a patient may use, but is not obligated to employ, extraordinary measures to prolong life. Secular philosophers weighed in with similar analyses, which identified some treatments as burdensome. They defended, on the grounds of patient autonomy, the right of a patient to refuse to consent to overly burdensome medical treatment. The U.S. legal system concurred in a series of landmark cases. A generation later, clinical ethicists continue to debate this issue and to refine the means by which those decisions are made. In 2006, board certification became available in the subspecialty of hospice and palliative medicine.
Alas, many doctors are still reluctant to initiate discussions of end-of-life care with their patients. There are multiple reasons for this. Reimbursement is one barrier. Attempts to improve reimbursement for this activity in 2009 were abandoned when Sarah Palin mischaracterized the process as the establishment of government death panels.
Clinical ethicists would characterize these discussions as the epitome of patient advocacy. Can the acrimonious debate be enlightened by the choices of a pet lover?
Physicians need to carefully discuss goals of treatment rather than cop out and ask that horrifically inaccurate question, "Do you want us to do everything?"
I assert that it can. My brother now cares for two more German Shepherds adopted from rescue shelters. He is also my health care power of attorney. I anticipate his experience with his beloved pets will provide him with the wisdom and compassion to make choices for me should circumstances so require.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. E-mail Dr. Powell at [email protected].
* Author's changes made 5/10/2012.
The patient was an 11-year-old female who had been generally healthy until the past 3 months. Polyuria and polydipsia suggested diabetes insipidus. A splenic mass had been removed, but now there was a symptomatic mass in the left femur. The recommended aggressive course of therapy included amputation of the leg. But was aggressive treatment in the patient’s best interest, given what appeared to be metastatic cancer? As chair of an ethics committee and ethics consult service, I have facilitated many similar conversations about end-of-life care.
This time was a little different. The conversation was with my brother. The patient was his elderly German Shepherd dog. Since we lived 1,000 miles apart, it occurred over the phone. Besides those features, the conversation followed a typical trajectory over two evenings. The day after our last conversation, my brother made his decision. The vet made a compassionate house call to end the pet’s suffering.
There have been remarkable advances in veterinary medicine.* A recent New York Times article raised the ethical question about how much money and effort is appropriate for a dying pet.
I do not share my brother’s predilection to form emotional bonds with animals, but I know many people who do. My observations reveal no fundamental difference between how pet owners make loving decisions for their pets and how most people make wise decisions about end-of-life care for themselves and their relatives. At least until someone interjects that vague, loaded, and misleading question, "Do you want us to do everything?"
At what point does aggressive treatment merely prolong the suffering?
The U.S. health care system does a very poor job addressing that question. In many ways, that question created modern clinical ethics. Perhaps the attitudes of pet caretakers can give us insight that often gets eclipsed by political rhetoric about fictional death panels.
Western society’s attitudes toward animals have been evolving. As a child, I learned that humans were distinguished from "lower" animals because only humans used language, used tools, had emotions, or appeared distressed by ethical dilemmas. One by one, scientific research has undermined each of these supposed distinctions. Genetics has shown a 98% homology of human DNA with that of other primates. Rather than merely trying to prevent animal cruelty, secular organizations such as People for the Ethical Treatment of Animals (PETA) seek to establish ethical norms for interacting with animals. This includes discouraging the use of furs, changing the architecture of zoos, and opposing practices in modern farming that crowd animals in cages. Religion weighs in on this subject as well. The relationship between humans and other animals has been the subject of the spring 2012 religion and science seminar at the Lutheran School of Theology at Chicago, and of the Goshen College 2012 Conference on Science and Religion. Of course, Hinduism predates all this by millennia.
Life-sustaining technologies blossomed in the 1970s, partly due to advancements in medical science, and partly due to profitability from Medicare reimbursement, which began in 1965. Theologians, historically the experts on the meaning of life, began to debate whether technology was always a good thing. A Catholic voice from this era was the Rev. Kevin O’Rourke, a theologian at Saint Louis University who later moved to Loyola in Chicago. He passed away on March 28.
For decades he staunchly defended the difference between ordinary and extraordinary actions at the end of life, arguing that a patient may use, but is not obligated to employ, extraordinary measures to prolong life. Secular philosophers weighed in with similar analyses, which identified some treatments as burdensome. They defended, on the grounds of patient autonomy, the right of a patient to refuse to consent to overly burdensome medical treatment. The U.S. legal system concurred in a series of landmark cases. A generation later, clinical ethicists continue to debate this issue and to refine the means by which those decisions are made. In 2006, board certification became available in the subspecialty of hospice and palliative medicine.
Alas, many doctors are still reluctant to initiate discussions of end-of-life care with their patients. There are multiple reasons for this. Reimbursement is one barrier. Attempts to improve reimbursement for this activity in 2009 were abandoned when Sarah Palin mischaracterized the process as the establishment of government death panels.
Clinical ethicists would characterize these discussions as the epitome of patient advocacy. Can the acrimonious debate be enlightened by the choices of a pet lover? I assert that it can.
Physicians need to carefully discuss goals of treatment rather than cop out and ask that horrifically inaccurate question, "Do you want us to do everything?"
My brother now cares for two more German Shepherds adopted from rescue shelters. He is also my health care power of attorney. I anticipate his experience with his beloved pets will provide him with the wisdom and compassion to make choices for me should circumstances so require.
Dr. Powell is associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. E-mail Dr. Powell at [email protected].
* Author's changes made 5/10/2012.
Camouflaging Informed Consent
In 1972, Canterbury v. Spence changed American health law, and with it, the patient-physician relationship. Some ethicists cite this as the beginning of modern medical ethics. Before a patient agreed to a surgical procedure, he was to be informed about its risks and benefits. Failure to do so was negligence. Part III of that appellate court decision gives a concise history explaining why the judges did not consider their decision revolutionary. That, however, wasn’t the way doctors experienced it. A decade earlier, arguments had been advanced justifying why it was better not to tell a patient he had cancer. Not all physicians agreed with that practice, labeled therapeutic privilege. But after Canterbury, such paternalism was no longer acceptable (with extremely rare exceptions).
Per Canterbury:
"The root premise is the concept, fundamental in American jurisprudence, that ‘every human being of adult years and sound mind has a right to determine what shall be done with his own body.’ ... True consent to what happens to one’s self is the informed exercise of a choice, and that entails an opportunity to evaluate knowledgeably the options available and the risks attendant upon each. The average patient has little or no understanding of the medical arts, and ordinarily has only his physician to whom he can look for enlightenment with which to reach an intelligent decision. From these almost axiomatic considerations springs the need, and in turn the requirement, of a reasonable divulgence by physician to patient to make such a decision possible."
As outlined by that court, informed consent is not a form that needs to be signed. It is the professional duty to communicate information to the patient. That includes overcoming cultural and language barriers, poor health literacy, and denial.
The standard for what is contained in informed consent has evolved. Not every minute risk need be discussed. In many states, the requirements went from a professional standard (What information did other physicians customarily disclose?) to a reasonable person standard (What information would a reasonable patient want to know?). Then, as with many ideas that start out with good intentions, the whole thing spun out of control. The amount of material to be disclosed expanded without bound. There can be too much of a good thing. All too often, modern medical consent forms have become like the End User License Agreement (EULA) I ignore whenever I install computer software. Don’t most people just click the box that says, "I have read and accepted the terms of the EULA," without actually reading it? As if you really had a choice?
On taking my mother to the emergency department recently, I was given a binder filled with 32 pages of orientation materials, privacy notifications, descriptions of patient rights and responsibilities, and other paraphernalia, in addition to a five-page consent form for general medical care. I read it only out of sheer boredom while waiting 6 hours for test results to come back. The 47 pages of information in the discharge packet 2 days later were never read. But what I really wanted was a CD copy of her MRI to take to her follow-up appointment with a different doctor. That, I was told, could be obtained only from Health Information Services, which wasn’t open on a Sunday.
This experience has strongly reinforced my prior belief that idolizing autonomy can become counterproductive. Some people have the notion that maximizing autonomy is achieved by increased information sharing, or more practically, adding another form. The word camouflage comes to mind. You can’t see the forest for the trees. A hospital a few years ago wondered whether it should notify patients that a surgical resident might perform part of the procedure. The best advice seemed to be yes, include a sentence about that on the consent form. Almost no one will ever see it.
Medicine desperately needs its own version of the Paperwork Reduction Act of 1995 (PRA). I say this not to reduce the burden on the physician, but as a patient advocate hoping to improve the focus on important facts, thereby actually enhancing decision making rather than obfuscating it.
"The PRA mandates that all federal government agencies must obtain a Control Number from OMB [Office of Management and Budget] before promulgating a form that will impose an information collection burden on the general public. Once obtained, approval must be renewed every 3 years. In order to obtain or renew such approval, an agency must fill out OMB Form 83-I, attach the proposed form, and file it with OIRA [Office of Information and Regulatory Affairs]. On Form 83-I, the agency must explain the reason why the form is needed and estimate the burden in terms of time and money that the form will impose upon the persons required to fill it out."
Having government paperwork to reduce paperwork seems a little, no, a lot oxymoronic. However, the administrator of the OIRA appointed by President Barack Obama is Cass R. Sunstein, one of the coauthors of the book "Nudge: Improving Decisions about Health, Wealth, and Happiness." Advertisers have accumulated a vast science on how to influence people to buy things they don’t need. "Nudge" suggests it is time to use that research in behavioral economics to nudge people to default toward making good decisions. This approach, called libertarian paternalism, may also seem like an oxymoron. But then, lately, so has the term informed consent.
Dr. Kevin T. Powell is an associate professor of pediatrics at St. Louis University and a pediatric hospitalist at SSM Cardinal Glennon Children’s Medical Center in St. Louis. He said he had no relevant financial disclosures. E-mail Dr. Powell.