Human experimentation: The good, the bad, and the ugly

Ever since the earliest medical practitioners treated the first patients, a tension has existed between potentially beneficial innovation and unintentional harm. For many centuries doctors relied on their own experience or intuition to determine what was best for those whom they treated. It was not until the 17th century that Francis Bacon introduced the scientific method that consisted of systematic observation and testing of hypotheses. In the case of clinical science, this provided an objective means of determining which treatments would be in the best interest of patients. Since then, society has greatly benefited from remarkable medical advancements based on what is essentially human experimentation, much of it noble, but unfortunately some episodes quite tragic, misguided, and even demonic.

The most notorious human research abuses were those perpetrated by the Nazi regime during the Holocaust. There were only 200 survivors from the 1,500 sets of twins forced to participate in Josef Mengele’s infamous twin experiments at the Auschwitz concentration camp. Many of these investigations were genetic experiments intended to prove the superiority of the Aryan race. Little useful scientific information was gained from these inhumane and evil studies.

However, totalitarianism is not a prerequisite for mistreatment of human subjects. The American research community has its own checkered past. Possibly the best-known abuse is the Tuskegee syphilis study, conducted between 1932 and 1972 by the U.S. Public Health Service. Four hundred impoverished African American men infected with syphilis, who were not fully informed about their disease, were closely followed in order to record the natural history of this deadly and debilitating illness. These men were not treated with penicillin, although the drug became available in 1947. As a result, more than one-third of the subjects died of their disease, many of their wives contracted syphilis, and numerous children were needlessly born with congenital syphilis.

On the other end of the ethical scale are a number of noble researchers scattered throughout history who insisted on experimenting on themselves before subjecting others to their treatments or procedures. A prime example is a courageous and creative German surgical intern, Werner Forssmann, who paved the way to heart surgery through self-experimentation. Even into the 20th century, it was taboo for a physician to touch the living heart, so much of its physiology and pathophysiology remained shrouded in mystery. In 1929, Dr. Forssmann performed a cut-down on his own antecubital vein, advanced a ureteral catheter into the right side of his heart, and then descended a flight of stairs to confirm its position by x-ray. Later experiments, also performed on himself, produced the first cardiac angiograms. Although heavily criticized by his superiors and the German medical establishment, Dr. Forssmann, an obscure urologist and general surgeon at the time, was ultimately vindicated when he shared the Nobel Prize in 1956.

From the very beginning of surgery as a clinical science, surgeons have stood on the precipice between beneficial innovation and unintentional harm to their patients. Because of the very nature of what they do, it has rarely been possible for them to experiment on themselves before testing their ideas on others. Every operation ever devised, occasionally with, but often without, prior animal experimentation, has had its initial human guinea pigs. In fact, surgeons have generally been given freer rein to try new and untested procedures or to modify older accepted ones, enjoying greater license than their counterparts who innovate with drugs and medical devices and who are more tightly regulated by agencies such as the Food and Drug Administration.

In the best of circumstances, surgical patients are fully informed as to the potential consequences of a novel operation, both good and bad, and the results are carefully recorded to determine the benefit/harm ratio of the procedure. Ideally, though it is often not possible, the new approach is compared to a proven alternative therapy in a carefully designed trial. Unfortunately, such careful analysis has not always been done.

A glaring example of surgical human experimentation gone wrong is the frontal lobotomy story. In the early part of the 20th century, mental institutions in this country and throughout the world were filled with desperate patients for whom few therapeutic alternatives were available. Many of these patients were incapable of giving meaningful informed consent. In 1935, frontal lobotomy was introduced by António Egas Moniz, a Portuguese neurologist, who later shared a highly controversial Nobel Prize for his discovery. In 1946, an American neuropsychiatrist, Walter Freeman, modified the procedure so it could be done by psychiatrists with an ice pick–like instrument via a transorbital approach. A neurosurgeon performing a craniotomy, general anesthesia, and an operating room were no longer necessary, and this simpler operation proliferated rapidly despite its increasingly well-known and devastating side effects of loss of personality, decreased cognition, and even death. Only after more than 40,000 procedures had been done in the United States did mounting criticism eventually lead to a ban on most lobotomies.

On the more noble side of surgical innovation, if Dr. Thomas Starzl and Dr. C. Walton Lillehei had not persisted despite failure after failure and death after death, liver transplantation and cardiac surgery would not have evolved into the lifesaving therapies they are today. These surgical pioneers, and many others like them who persisted in the face of failure to develop new and useful approaches to surgical disease, can hardly be condemned for human experiments that were disasters in the short term but enduring medical advances in the long term. Their initial patients were courageous, desperate, and, one hopes, well informed.

What separates these successful forerunners from those who promoted the lobotomy debacle? One factor may be history itself. Passed by Congress in response to the atrocities that had occurred earlier in the century, the National Research Act of 1974 mandated Institutional Review Boards (IRBs) in institutions conducting human research. Although the initial attempts at operating on the heart and transplanting the liver predated IRBs, much of the development of these specialties as we know them today took place under the watchful eye of these committees.

Whereas Freeman’s modifications made lobotomy a procedure that could be performed by almost anyone, cardiac surgery and liver transplantation required resources that could be provided only by major academic institutions.

While lobotomy almost became a traveling sideshow with poor documentation of results, the earliest attempts at heart surgery and liver transplantation were carefully recorded in the surgical literature for the entire academic community to analyze and ponder.

We owe much to those surgeons who persisted against great odds to develop our craft and to those patients with the courage to be a part of the great enterprise of surgical innovation. Without their daring, perseverance, and creativity, surgery would not have evolved to the diverse and noble specialty it is today. It is now incumbent upon us to make certain that future surgical innovation transpires only under an umbrella of safe, well-informed, and satisfactorily documented and controlled human experimentation.

Dr. Rikkers is the Editor in Chief of ACS Surgery News.
