In Appendicitis Case, Patient Sues Clinic, Clinic Sues NP

In addition to discussing the legal technicalities of the case, David M. Lang explains "the two things in medicine that you need to know well."

A 17-year-old girl with diminished appetite, abdominal pain, and vomiting presented to a pediatrics clinic in New York, where she was examined by an NP. She was found to have hematuria as well, and the NP diagnosed viral gastroenteritis.

Eight days later, the patient returned to the clinic with worsening pain. The pediatrician who examined her had her transported to a hospital, where a ruptured appendix was diagnosed. The patient underwent immediate surgery, which included resection of portions of her colon and intestines.

Despite a good recovery, the patient claimed that she suffers residual gastrointestinal dysfunction. She further claimed that the NP should have diagnosed appendicitis during her initial visit, which would have allowed for less invasive treatment.

Initially, the plaintiff brought suit against the clinic and several employees, but not the NP. She later moved to add the NP, but that motion was denied due to the statute of limitations. The clinic then impleaded the NP, arguing that it was her negligence in failing to diagnose the appendicitis that caused the alleged harm.

The matter proceeded to trial against the NP and the clinic. The defendants claimed that the plaintiff’s symptoms did not suggest appendicitis at the time of the NP’s examination.

OUTCOME
A defense verdict was returned.

COMMENT
I used to tell students, “There are only two things in medicine that you need to know well: the common and the dangerous. For everything else, there is time.” I realize now that I sound like that guy from the Dos Equis commercial.

Consider this, however: If we don’t remember the difference between polymyositis and polymyalgia rheumatica, who cares? In such cases, we have time for review—and the patient will be better served by a clinician who has the intellectual curiosity to review conditions that he or she hasn’t seen in a while.

But the diseases that are both common and dangerous require our full proficiency. Basic competence requires us to be well versed in common diseases. And dangerous conditions, even if relatively rare, must be recognized and managed immediately. Entities that are common and dangerous—such as appendicitis—should enter our thoughts often.

In this case, we have a 17-year-old girl presenting to an outpatient clinic setting with abdominal pain, vomiting, and anorexia. Unfortunately, we are not given some important historical information, including duration and location of the pain and the presence or absence of pain migration. Physical exam findings are not described.

The trouble with appendicitis is that there is no single sign or symptom that can effectively diagnose it or exclude it from the differential. When evaluating a patient in a setting in which real-time laboratory testing is not generally ordered, clinicians must distinguish between self-limiting and dangerous abdominal pain. Where does that leave us in this case? Abdominal pain and vomiting are common, and ill patients frequently report anorexia.

Other clinical features associated with appendicitis may be more helpful. For example, pain migration has been described as “the most discriminating feature of the patient’s history,”1 with a sensitivity and specificity of approximately 80%.2 When present, psoas sign is fairly specific (0.95) but not sensitive (0.16).3
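
To see how these figures translate into diagnostic weight, the sketch below converts the reported sensitivity and specificity into likelihood ratios and post-test probabilities. This is a minimal illustration; the pretest probability is an assumed placeholder, not a value from this case, while the sensitivity and specificity figures are those cited above.

```python
# Minimal sketch: turning sensitivity/specificity into likelihood ratios and
# post-test probabilities. The pretest probability is an assumed placeholder,
# not a figure from the case; sensitivities/specificities mirror the text.

def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a finding with the given sensitivity/specificity."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

def post_test_probability(pretest, lr):
    """Apply a likelihood ratio to a pretest probability via odds."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

PRETEST = 0.10  # assumed 10% pretest probability of appendicitis (illustrative)

# Pain migration: sensitivity and specificity of roughly 80% (reference 2)
lr_pos, lr_neg = likelihood_ratios(0.80, 0.80)
print(f"Migration present: post-test ~{post_test_probability(PRETEST, lr_pos):.0%}")
print(f"Migration absent:  post-test ~{post_test_probability(PRETEST, lr_neg):.0%}")

# Psoas sign: specific (0.95) but not sensitive (0.16) (reference 3)
lr_pos, _ = likelihood_ratios(0.16, 0.95)
print(f"Psoas sign present: post-test ~{post_test_probability(PRETEST, lr_pos):.0%}")
```

Under these assumptions, a positive migration history or psoas sign raises a 10% pretest probability to roughly 26%-31%, while the absence of migration lowers it to about 3% without excluding the diagnosis—consistent with the point that no single finding rules appendicitis in or out.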

When evaluating patients in an outpatient setting, we have a snapshot of a disease process—a still frame of a movie. We are told what happened up to that point (with varying degrees of accuracy). But like the patient, we don’t know what will happen after he or she leaves the office: The still frame is gone, but the movie continues.

It can be helpful to inform patients of the concerning diagnoses in your differential and alert them to patterns of clinical progression that warrant return or immediate emergency department evaluation. Calling the patient to see how he or she is doing can be very useful for the clinician and is generally valued and appreciated by patients. Here, if gastroenteritis was suspected, a phone call after a few hours of antiemetic and rehydration therapy might have helped determine whether the patient’s symptoms had improved. This, of course, would not be conclusive, but at least it would give the clinician additional information and the patient additional comfort.

In this case, the jury was persuaded that the NP provided good treatment and acted within the standard of care. Diagnosing appendicitis can be tricky, even under the best circumstances. The NP’s defense was probably aided by good documentation showing that appendicitis seemed less likely at the time of her evaluation. Ultimately, she performed well enough that her care withstood scrutiny from the plaintiff, the plaintiff’s expert witness, and eventually, her own practice.

This case was interesting from a legal perspective in that the plaintiff originally failed to file suit against the NP—probably resorting to liability under the theory of respondeat superior (generally, employer liability for employee actions). While the plaintiff was unsuccessful in adding the NP later, due to the statute of limitations, the NP was brought into the case by her own practice, through a procedure known as impleader. An impleader action is brought by a co-defendant. Under typical impleader rules, the defendant becomes a “third-party plaintiff” and brings suit against a “third-party defendant” (in this case, the practice and the NP, respectively).

IN SUM
Always keep important diagnoses in mind, and document well. Anticipate a changing clinical course, and instruct patients on how to respond to potential changes. In certain cases, we are well served to pick up the phone, check on the patient, and make the presentation less of a static picture and more of a dynamic movie. —DML

REFERENCES
1. Craig S, Brenner BE. Appendicitis (updated October 26, 2012). Medscape Reference. http://emedicine.medscape.com/article/773895-overview. Accessed January 16, 2015.
2. Yeh B. Evidence-based emergency medicine/rational clinical examination abstract: does this adult patient have appendicitis? Ann Emerg Med. 2008;52(3):301-303.
3. Wagner J, McKinney WP, Carpenter JL. Does this patient have appendicitis? JAMA. 1996;276(19):1589-1594.

Commentary by David M. Lang, JD, PA-C, an experienced PA and a former medical malpractice defense attorney who practices law in Granite Bay, California. Cases reprinted with permission from Medical Malpractice Verdicts, Settlements and Experts, Lewis Laska, Editor, (800) 298-6288.

An oversight

After reading the Child Psychiatry Consult column “Aggression and angry outbursts” by Dr. Robert R. Althoff in the September 2014 issue of Pediatric News, I was disappointed that the differential diagnosis did not include an autism spectrum disorder such as DSM-IV Asperger syndrome.

The complex of symptoms described almost perfectly reflects the history of a child with autism. Typical autism spectrum disorder (ASD) issues of needing to direct the play, playing by their rules, and being adamant that things must be the way they see it are noted in the patient’s history. Aggression and outbursts also are typical of a patient with ASD.

Even though autistic behavior is typically predictable, parents are not always alert to the triggers. Most meltdowns are over transitions and denials. Parents of patients with autism often complain that they “walk on eggshells.”

Edward B. Aull, M.D.

Behavioral Pediatrics

St. Vincent Carmel Hospital

Carmel, Ind.

 

Dr. Althoff responds: I’d like to thank Dr. Aull for pointing out an oversight in my article. Certainly, children on the autistic spectrum can exhibit aggression, although it is not part of the diagnostic criteria for DSM-5 ASD, which include deficits in social interaction and communication, and restricted, repetitive patterns of behavior, interests, or activities. I was not intending for the case to give the impression that this child had difficulty with social communication and restricted interests, but the diagnosis of ASD should be considered in the differential. Similar to the situation in obsessive-compulsive disorder or other anxiety disorders, when either the need for social communication becomes exceptionally high or the restricted behavior or interests are challenged, these children can become aggressive, although most do not. Interestingly enough, children with DSM-IV Asperger syndrome and high-functioning autism have co-occurring disorders up to 74% of the time, with the highest percentages in the disorders on the differential that I listed in the original article: behavior disorders, anxiety disorders, and mood disorders (J. Autism Dev. Disord. 2010;40:1080-93). Given these findings, one might consider that, while an ASD should be included in the differential, the aggressive behavior may not be associated with the autism symptoms per se but rather may represent co-occurring symptoms.

Pertussis persists

The Centers for Disease Control and Prevention suggests that recurring pertussis outbreaks may be the “new normal.” Such outbreaks show that some of what we “know” about pertussis is still correct, but some things are evolving. So in this new year, what do we need to know about patient vulnerability after vaccination, as well as the clinical course, diagnosis, and treatment of this stubbornly persistent disease?

Vulnerability after acellular pertussis vaccine

The recent large 2014 California outbreak surpassed the record numbers for the previously highest incidence year, 2010 (MMWR 2014;63:1129-32). This is scary because more cases had been reported in California in 2010 than in any prior year since the 1940s. The overall 2014 California pertussis rate (26/100,000 population) was approximately 10 times the national average for the first 9 years of this century. Are there clues as to who is most vulnerable and why?

Dr. Christopher J. Harrison

No age group was spared, but certain age groups did appear more vulnerable. Infants had a startling 174.6/100,000 incidence (six times the rate for the overall population). It is not surprising to any clinician that infants less than 1 year of age were hardest hit. Infants have the most evident symptoms with pertussis. Infants also have 5-7 months of their first year in which they are incompletely immunized. Therefore, many are not expected to be protected until about 7-9 months of age. This vulnerability could be partly remedied if all pregnant women got Tdap boosters as recommended during pregnancy.

Of note, 15-year-olds had an incidence similar to that of infants (137.8/100,000). Ethnically, non-Hispanic whites had the highest incidence among adolescents (166.2/100,000), compared with Hispanics (64.2/100,000), Asian/Pacific Islanders (43.9/100,000), and non-Hispanic blacks (23.7/100,000). Disturbingly, 87% of cases among 15-year-olds had received a prior Tdap booster dose (median time since booster Tdap = 3 years, range = 0-7 years). Previous data from the 2010 outbreak suggested that immunity to pertussis wanes 3-4 years after receipt of the last acellular pertussis (aP)–containing vaccine. This is likely part of the explanation in 2014 as well. However, waning immunity after the booster does not explain why non-Hispanic whites had two to six times the incidence of other ethnicities. Non-Hispanic whites are thought to be the demographic with the most vaccine refusal and vaccine delay in California, so this may partially explain excess cases. Racial differences in access to care or genetic differences in disease susceptibility also may play a role.
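
For a sense of what rates like these mean in absolute numbers, the short sketch below converts incidence per 100,000 into approximate case counts. The population denominators are rough assumptions used only for illustration; they are not figures from the MMWR report.

```python
# Minimal sketch: relating incidence per 100,000 to approximate case counts.
# Population denominators are assumptions for illustration only; they are not
# figures from the cited MMWR report.

def cases_from_rate(rate_per_100k, population):
    """Approximate case count implied by an incidence rate and a population."""
    return rate_per_100k / 100_000 * population

CA_POPULATION = 38_800_000   # assumed total California population, 2014
CA_INFANTS = 500_000         # assumed number of California infants <1 year of age

print(f"Overall (26/100,000):    ~{cases_from_rate(26.0, CA_POPULATION):,.0f} cases")
print(f"Infants (174.6/100,000): ~{cases_from_rate(174.6, CA_INFANTS):,.0f} cases")
```

Even with these rough denominators, the arithmetic shows why a statewide rate on the order of 26 per 100,000 translates into roughly 10,000 reported cases, with infants contributing disproportionately relative to their small share of the population.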

Why is this biphasic increase in incidence in California a microcosm of the new epidemiology of pertussis in the United States? A kinder, gentler DTaP vaccine replaced the whole-cell DTP in the late 1990s. This occurred in response to the public’s concern about potential central nervous system adverse effects associated with the whole-cell DTP vaccine. Immunogenicity studies seemed to show equivalent immune responses in infants and toddlers receiving DTaP, compared with those who received DTP. Only in the last 5 years have we learned that the new DTaP and Tdap are not working as well as we had hoped.

The two aspects to the lesser protection provided by aP vaccines are pertactin-deficient pertussis strains and quicker waning of aP vaccine–induced immunity. Antibody to pertactin appears to be important in protection against clinical pertussis. New circulating clinical strains of pertussis may not have pertactin (N. Engl. J. Med. 2013;368:583-4). The strains used in our current DTaP and Tdap were designed to protect against pertactin-containing strains and were tested for this. This means that a proportion of the antibodies induced by vaccine strains are not useful against pertactin-deficient strains. The aP vaccine still induces antibody to the pertussis toxin and other pertussis components in the vaccines, so they will likely still reduce the severity of disease. But the vaccines are not likely to prevent infections from pertactin-deficient strains. This is similar to the partial vaccine mismatch that we are seeing with the current seasonal H3N2 influenza vaccine strain.

The second aspect is that protection appears to wane approximately 3-5 years after the last dose of aP-containing vaccine. This contrasts sharply with past expectations of 7-10 years of protection from whole-cell pertussis–containing vaccines. The less reactive aP vaccine produces fewer adverse effects because it does not produce as much inflammation as whole-cell DTP. The problem is that part of the reason DTP elicits such good protective responses is the amount of inflammation it produces. So with less aP vaccine–induced inflammation come less robust antibody and T-cell responses.

Nevertheless, the current acellular pertussis vaccines remain the most effective available tools to reduce pertussis disease (Cochrane Database Syst. Rev. 2014;9:CD001478). But until we have new versions of pertussis vaccines that address these two issues, we clinicians need to remain vigilant for signs and symptoms of pertussis.

Clinical course

Remember that a whoop is rarely seen in young children and is often absent in older patients as well. The many outbreaks over the last 10 years have confirmed that paroxysmal cough with/without apnea in an infant/toddler should raise our index of suspicion. Likewise, older children, adolescents, and adults with persistent cough beyond 2 weeks are potential pertussis cases. Once the diagnosis is made, treatment is not expected to have a major impact on the clinical course, in part because the diagnosis is usually delayed (more than 10 days into symptoms). This delay allows more injury to the respiratory mucosa and cilia, so that healing can require 6-12 weeks after bacterial replication ceases. This prolonged healing process is mostly responsible for the syndrome known as the “100-day cough.” So the clinical course of pertussis has not changed in the last 10 years. However, there have been changes in the commonly used diagnostic approach.

Pertussis diagnosis and contagion

During the last 5 years, polymerase chain reaction (PCR) testing has become the preferred technology to detect pertussis because of its sensitivity and quick turnaround time. The gold standard for pertussis diagnosis remains culture, but it is expensive, cumbersome, and slow (up to a week to provide results). An ongoing debate concerns how long PCR testing remains positive after the onset of symptoms or after treatment. This was not a problem when culture was the diagnostic tool of choice: data from the 1970s and 1980s indicated that cultures were rarely positive after the third week of symptoms, even without treatment. Furthermore, macrolides eliminated both contagion and positive culture results in infected patients after 5 days of treatment.

So now that we use PCR most often for diagnosis, what is the outer limit of positivity? A recent prospective cohort study from Salt Lake City suggests that PCR may detect pertussis DNA way beyond 3 weeks after symptom onset (J. Ped. Infect. Dis. 2014;3:347-9). Among patients hospitalized with laboratory-confirmed Bordetella pertussis infection, half had persistently positive pertussis PCR testing more than 50 days after symptom onset, despite antibiotic treatment and clinical improvement. The median time to the last positive test after symptom onset was 58 days (range, 4-172 days).

This raises the question as to whether there are viable pertussis organisms in the respiratory tract beyond the traditional 3 weeks defined by culture data. It is likely that DNA persists in the thick mucus of the respiratory tract way beyond viability of the last pertussis organisms. Put another way, PCR likely detects bacterial corpses or components way beyond the time that the patient is contagious. Unfortunately, current PCR data do not tell us how long patients remain contagious with the current strains of pertussis as infecting agents. Some institutions appear to be extending the isolation time for patients treated for pertussis beyond the traditional 5 days post initiation of effective treatment. Until more data are available, we are somewhat in the dark. But I would take comfort in the fact that it is unlikely the “new” data will be much different from those derived from the traditional studies that use culture to define infectivity. The American Academy of Pediatrics Committee on Infectious Diseases Red Book appears to agree.

For hospitalized pertussis patients, the AAP Committee on Infectious Diseases Red Book recommends standard and droplet precautions for 5 days after starting effective therapy, or 3 weeks after cough onset if appropriate antimicrobial therapy has not been given.

In addition, the CDC states: “PCR has optimal sensitivity during the first 3 weeks of cough when bacterial DNA is still present in the nasopharynx. After the fourth week of cough, the amount of bacterial DNA rapidly diminishes, which increases the risk of obtaining falsely negative results.” Later in the same document, the CDC says: “PCR testing following antibiotic therapy also can result in falsely negative findings. The exact duration of positivity following antibiotic use is not well understood, but PCR testing after 5 days of antibiotic use is unlikely to be of benefit and is generally not recommended.”

So what do we know? Not all PCR assays use the same primers, so some variance from the usual experience of up to 4 weeks of positive PCR results may be due to differences in the assays. But this raises concern that the PCR that you order may be positive at times when the patient is no longer contagious.

Pertussis treatment

If strains of pertussis have changed their pertactin antigen, are they changing their antibiotic susceptibility patterns? While there have been reports of macrolide resistance in a few pertussis strains, these remain rare. The most recent comprehensive review of treatment efficacy was a Cochrane review performed in 2005 and published in 2007 (Cochrane Database Syst. Rev. 2007;3:CD004404). The reviewers evaluated 10 trials from 1969 to 2004 in which microbiologic eradication was defined by negative results on repeat pertussis culture. While meta-analysis of microbiologic eradication was not possible because of differences in antibiotic use, the investigators did conclude that antibiotic treatment “is effective in eliminating B. pertussis from patients with the disease to render them noninfectious, but does not alter the subsequent clinical course of the illness.”

Further, they state that “the best regimens for microbiologic clearance, with fewer side effects,” are 3 days of azithromycin (a single 10-mg/kg dose on 3 consecutive days) or 7 days of clarithromycin (7.5-mg/kg dose twice daily).

Another effective regimen is 14 days of erythromycin ethylsuccinate (60 mg/kg per day in 3 divided doses).

CDC treatment recommendations include azithromycin or erythromycin, with trimethoprim-sulfamethoxazole as a possibility for macrolide-intolerant patients, although there are fewer data and success rates may not be as high.
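
As a concrete illustration of how the weight-based regimens quoted above translate into per-dose amounts, here is a small dosing sketch. The patient weight is hypothetical and the regimen figures simply mirror the text; this illustrates the arithmetic only and is not prescribing guidance.

```python
# Minimal sketch: per-dose amounts for the macrolide regimens quoted above,
# for a hypothetical patient weight. Figures mirror the text (Cochrane review);
# this is illustrative arithmetic only, not prescribing guidance.

from dataclasses import dataclass

@dataclass
class Regimen:
    drug: str
    mg_per_kg_per_day: float
    doses_per_day: int
    days: int

REGIMENS = [
    Regimen("azithromycin", 10.0, 1, 3),                   # 10 mg/kg once daily x 3 days
    Regimen("clarithromycin", 15.0, 2, 7),                 # 7.5 mg/kg twice daily x 7 days
    Regimen("erythromycin ethylsuccinate", 60.0, 3, 14),   # 60 mg/kg/day in 3 doses x 14 days
]

weight_kg = 12.0  # hypothetical toddler weight

for r in REGIMENS:
    per_dose_mg = weight_kg * r.mg_per_kg_per_day / r.doses_per_day
    print(f"{r.drug}: {per_dose_mg:.0f} mg per dose, "
          f"{r.doses_per_day}x/day for {r.days} days")
```

For a hypothetical 12-kg toddler, this works out to 120 mg of azithromycin once daily for 3 days, 90 mg of clarithromycin twice daily for 7 days, or 240 mg of erythromycin ethylsuccinate three times daily for 14 days.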

Conclusion

So what do we know now about pertussis?

• Outbreaks are ongoing and likely will continue until newer, more effective vaccines are produced, including those that circumvent the problem of pertactin-deficient strains.

• Pertussis is likely contagious up to 5 days on effective therapy, and for as long as 3 weeks if effective therapy has not been administered.

• PCR is a sensitive test that may remain positive for many weeks beyond contagion.

• Treatment with macrolides appears to be the most effective way to eradicate replicating pertussis pathogens.

• Treatment is not likely to have a major impact on the clinical course of disease because most of the damage to the respiratory tract is done prior to diagnosis and treatment. Treatment does reduce infectivity and subsequent cases.

• Current aP vaccines are our best preventive tools, including use in pregnant women to protect young infants.

As clinicians, our best course is to continue to immunize with the current vaccines, and remain vigilant for symptoms and signs of pertussis infection of patients so that early diagnosis and treatment can prevent further spread.

Dr. Harrison is professor of pediatrics and pediatric infectious diseases at Children’s Mercy Hospitals and Clinics, Kansas City, Mo. Children’s Mercy Hospitals receives funds from GlaxoSmithKline for Dr. Harrison being principal investigator on a multicenter research study of a hexavalent pertussis-containing infant vaccine. E-mail Dr. Harrison at [email protected].

References

Author and Disclosure Information

Publications
Topics
Legacy Keywords
pertussis, outbreaks, polymerase chain reaction, PCR, persistence, protection, vaccine
Sections
Author and Disclosure Information

Author and Disclosure Information

The Centers for Disease Control and Prevention suggests that recurring pertussis outbreaks may be the “new normal.” Such outbreaks show that some of what we “know” about pertussis is still correct, but some things are evolving. So in this new year, what do we need to know about patient vulnerability post vaccine as well as the clinical course, diagnosis, and treatment of this stubborn persisting disease?

Vulnerability after acellular pertussis vaccine

The recent large 2014 California outbreak surpassed the record numbers for the previously highest incidence year, 2010 (MMWR 2014;63:1129-32). This is scary because more cases had been reported in California in 2010 than in any prior year since the 1940s. The overall 2014 California pertussis rate (26/100,000 population) was approximately 10 times the national average for the first 9 years of this century, Are there clues as to who is most vulnerable and why?

Dr. Christopher J. Harrison

No age group was spared, but certain age groups did appear more vulnerable. Infants had a startling 174.6/100,000 incidence (six times the rate for the overall population). It is not surprising to any clinician that infants less than 1 year of age were hardest hit. Infants have the most evident symptoms with pertussis. Infants also have 5-7 months of their first year in which they are incompletely immunized. Therefore, many are not expected to be protected until about 7-9 months of age. This vulnerability could be partly remedied if all pregnant women got Tdap boosters as recommended during pregnancy.

Of note, 15-year-olds had an incidence similar to that of infants (137.8/100,000). Ethnically, non-Hispanic whites had the highest incidence among adolescents (166.2/100,000), compared with Hispanics (64.2/100,000), Asian/Pacific Islanders (43.9/100,000), and non-Hispanic blacks (23.7/100,000). Disturbingly, 87% of cases among 15-year-olds had received a prior Tdap booster dose (median time since booster Tdap = 3 years, range = 0-7 years). Previous data from the 2010 outbreak suggested that immunity to pertussis wanes 3-4 years after receipt of the last acellular pertussis (aP)–containing vaccine. This is likely part of the explanation in 2014 as well. However, waning immunity after the booster does not explain why non-Hispanic whites had two to six times the incidence of other ethnicities. Non-Hispanic whites are thought to be the demographic with the most vaccine refusal and vaccine delay in California, so this may partially explain excess cases. Racial differences in access to care or genetic differences in disease susceptibility also may play a role.

Why is this biphasic increase in incidence in California a microcosm of the new epidemiology of pertussis in the United States? A kinder, gentler DTaP vaccine replaced the whole-cell DTP in the late 1990s. This occurred in response to the public’s concern about potential central nervous system adverse effects associated with the whole-cell DTP vaccine. Immunogenicity studies seemed to show equivalent immune responses in infants and toddlers receiving DTaP, compared with those who received DTP. It has only been in the last 5 years that we now know that the new DTaP and Tdap are not working as well as we had hoped.

The two aspects to the lesser protection provided by aP vaccines are pertactin-deficient pertussis strains and quicker waning of aP vaccine–induced immunity. Antibody to pertactin appears to be important in protection against clinical pertussis. New circulating clinical strains of pertussis may not have pertactin (N. Engl. J. Med. 2013;368:583-4). The strains used in our current DTaP and Tdap were designed to protect against pertactin-containing strains and were tested for this. This means that a proportion of the antibodies induced by vaccine strains are not useful against pertactin-deficient strains. The aP vaccine still induces antibody to the pertussis toxin and other pertussis components in the vaccines, so they will likely still reduce the severity of disease. But the vaccines are not likely to prevent infections from pertactin-deficient strains. This is similar to the partial vaccine mismatch that we are seeing with the current seasonal H3N2 influenza vaccine strain.

The second aspect is that protection appears to wane approximately 3-5 years after the last dose of aP-containing vaccine. This contrasts sharply with expectations in the past of 7-10 years of protection from whole cell pertussis–containing vaccines. The less reactive aP vaccine produces fewer adverse effects by not producing as much inflammation as DPT. The problem is that part of the reason the DPT has such good protective responses is the amount of inflammation it produces. So with less aP vaccine–induced inflammation comes less robust antibody and T-cell responses.

Nevertheless, the current acellular pertussis vaccines remain the most effective available tools to reduce pertussis disease (Cochrane Database Syst. Rev. 2014;9:CD001478]). But until we have new versions of pertussis vaccines that address these two issues, we clinicians need to remain vigilant for signs and symptoms of pertussis.

 

 

Clinical course

Remember that a whoop is rarely seen in young children and often also not seen when older patients present. The many outbreaks over the last 10 years have confirmed that paroxysmal cough with/without apnea in an infant/toddler should raise our index of suspicion. Likewise, older children, adolescents, and adults with persistent cough beyond 2 weeks are potential pertussis cases. Once the diagnosis is made, treatment is not expected to have a major impact on the clinical course, in part because the diagnosis is usually delayed (more than 10 days into symptoms). This delay allows more injury to the respiratory mucosa and cilia so that healing can require 6-12 weeks after bacterial replication ceases. This prolonged healing process is what is mostly responsible for the syndrome known as the “100-day cough.” So the clinical course of pertussis has not changed in the last 10 years. However, there have been changes in the commonly used diagnostic approach.

Pertussis diagnosis and contagion

During the last 5 years, polymerase chain reaction (PCR) testing has become the preferred technology to detect pertussis. This is due to the sensitivity and quick turnaround time of the assay. The gold standard for pertussis diagnosis remains culture, but it is expensive, cumbersome, and slow (up to a week to provide results). An ongoing debate arose about how long PCR testing remains positive after the onset of symptoms or treatment. This was not the problem when culture was the diagnostic tool of choice. Data from the 1970s and 1980s indicated that cultures were rarely positive after the third week of symptoms even without treatment. Furthermore, macrolides eliminated both contagion and positive culture results of infected patients after 5 days of treatment.

So now that we use PCR most often for diagnosis, what is the outer limit of positivity? A recent prospective cohort study from Salt Lake City suggests that PCR may detect pertussis DNA way beyond 3 weeks after symptom onset (J. Ped. Infect. Dis. 2014;3:347-9). Among patients hospitalized with laboratory-confirmed Bordetella pertussis infection, half had persistently positive pertussis PCR testing more than 50 days after symptom onset, despite antibiotic treatment and clinical improvement. The median (range) for the last day for a positive test after symptom onset was 58 days (4-172 days).

This raises the question as to whether there are viable pertussis organisms in the respiratory tract beyond the traditional 3 weeks defined by culture data. It is likely that DNA persists in the thick mucus of the respiratory tract way beyond viability of the last pertussis organisms. Put another way, PCR likely detects bacterial corpses or components way beyond the time that the patient is contagious. Unfortunately, current PCR data do not tell us how long patients remain contagious with the current strains of pertussis as infecting agents. Some institutions appear to be extending the isolation time for patients treated for pertussis beyond the traditional 5 days post initiation of effective treatment. Until more data are available, we are somewhat in the dark. But I would take comfort in the fact that it is unlikely the “new” data will be much different from those derived from the traditional studies that use culture to define infectivity. The American Academy of Pediatrics Committee on Infectious Diseases Red Book appears to agree.

For hospitalized pertussis patients, the AAP Committee on Infectious Diseases Red Book recommends standard and droplet precautions for 5 days after starting effective therapy, or 3 weeks after cough onset if appropriate antimicrobial therapy has not been given.

In addition, the CDC states: “PCR has optimal sensitivity during the first 3 weeks of cough when bacterial DNA is still present in the nasopharynx. After the fourth week of cough, the amount of bacterial DNA rapidly diminishes, which increases the risk of obtaining falsely negative results.” Later in the same document, the CDC says: “PCR testing following antibiotic therapy also can result in falsely negative findings. The exact duration of positivity following antibiotic use is not well understood, but PCR testing after 5 days of antibiotic use is unlikely to be of benefit and is generally not recommended.”

So what do we know? Not all PCR assays use the same primers, so some variance from the usual experience of up to 4 weeks of positive PCR results may be due to differences in the assays. But this raises concern that the PCR that you order may be positive at times when the patient is no longer contagious.

Pertussis treatment

If strains of pertussis have changed their pertactin antigen, are they changing their antibiotic susceptibility patterns? While there have been reports of macrolide resistance in a few pertussis strains, these still remain rare. The most recent comprehensive review of treatment efficacy was a Cochrane review performed in 2005 and published in 2007 (Cochrane Database Syst. Rev. 2007;3:CD004404). They evaluated 10 trials from 1969 to 2004 in which microbiologic eradication was defined by negative results from repeat pertussis culture. While meta-analysis of microbiologic eradication was not possible because of differences in antibiotic use, the investigators did conclude that antibiotic treatment “is effective in eliminating B. pertussis from patients with the disease to render them noninfectious, but does not alter the subsequent clinical course of the illness.”

 

 

Further, they state that “the best regimens for microbiologic clearance, with fewer side effects,” are 3 days of azithromycin (a single 10-mg/kg dose on 3 consecutive days) or 7 days of clarithromycin (7.5-mg/kg dose twice daily).

Another effective regimen is 14 days of erythromycin ethylsuccinate (60 mg/kg per day in 3 divided doses) .

CDC treatment recommendations include azithromycin or erythromycin, with trimethoprim-sulfamethoxazole as a possibility for macrolide-intolerant patients, although there are less data and success rates may not be as high.

Conclusion

So what do we know now about pertussis?

• Outbreaks are ongoing and likely will continue until newer more effective vaccines are produced, including those that circumvent the problem of pertactin-deficient strains.

• Pertussis is likely contagious up to 5 days on effective therapy, and for as long as 3 weeks if effective therapy has not been administered.

• PCR is a sensitive test that may remain positive for many weeks beyond contagion.

• Treatment with macrolides appears to be the most effective way to eradicate replicating pertussis pathogens.

• Treatment is not likely to have a major impact on the clinical course of disease because most of the damage to the respiratory tract is done prior to diagnosis and treatment. Treatment does reduce infectivity and subsequent cases.

• Current aP vaccines currently are our best preventative tools – including use in pregnant women to protect young infants.

As clinicians, our best course is to continue to immunize with the current vaccines, and remain vigilant for symptoms and signs of pertussis infection of patients so that early diagnosis and treatment can prevent further spread.

Dr. Harrison is professor of pediatrics and pediatric infectious diseases at Children’s Mercy Hospitals and Clinics, Kansas City, Mo. Children’s Mercy Hospitals receives funds from GlaxoSmithKline for Dr. Harrison being principal investigator on a multicenter research study of a hexavalent pertussis-containing infant vaccine. E-mail Dr. Harrison at [email protected].

The Centers for Disease Control and Prevention suggests that recurring pertussis outbreaks may be the “new normal.” Such outbreaks show that some of what we “know” about pertussis is still correct, but some things are evolving. So in this new year, what do we need to know about patient vulnerability post vaccine as well as the clinical course, diagnosis, and treatment of this stubborn persisting disease?

Vulnerability after acellular pertussis vaccine

The recent large 2014 California outbreak surpassed the record numbers for the previously highest incidence year, 2010 (MMWR 2014;63:1129-32). This is scary because more cases had been reported in California in 2010 than in any prior year since the 1940s. The overall 2014 California pertussis rate (26/100,000 population) was approximately 10 times the national average for the first 9 years of this century, Are there clues as to who is most vulnerable and why?

Dr. Christopher J. Harrison

No age group was spared, but certain age groups did appear more vulnerable. Infants had a startling 174.6/100,000 incidence (six times the rate for the overall population). It is not surprising to any clinician that infants less than 1 year of age were hardest hit. Infants have the most evident symptoms with pertussis. Infants also have 5-7 months of their first year in which they are incompletely immunized. Therefore, many are not expected to be protected until about 7-9 months of age. This vulnerability could be partly remedied if all pregnant women got Tdap boosters as recommended during pregnancy.

Of note, 15-year-olds had an incidence similar to that of infants (137.8/100,000). Ethnically, non-Hispanic whites had the highest incidence among adolescents (166.2/100,000), compared with Hispanics (64.2/100,000), Asian/Pacific Islanders (43.9/100,000), and non-Hispanic blacks (23.7/100,000). Disturbingly, 87% of cases among 15-year-olds had received a prior Tdap booster dose (median time since booster Tdap = 3 years, range = 0-7 years). Previous data from the 2010 outbreak suggested that immunity to pertussis wanes 3-4 years after receipt of the last acellular pertussis (aP)–containing vaccine. This is likely part of the explanation in 2014 as well. However, waning immunity after the booster does not explain why non-Hispanic whites had two to six times the incidence of other ethnicities. Non-Hispanic whites are thought to be the demographic with the most vaccine refusal and vaccine delay in California, so this may partially explain excess cases. Racial differences in access to care or genetic differences in disease susceptibility also may play a role.

Why is this biphasic increase in incidence in California a microcosm of the new epidemiology of pertussis in the United States? A kinder, gentler DTaP vaccine replaced the whole-cell DTP in the late 1990s. This occurred in response to the public’s concern about potential central nervous system adverse effects associated with the whole-cell DTP vaccine. Immunogenicity studies seemed to show equivalent immune responses in infants and toddlers receiving DTaP, compared with those who received DTP. Only in the last 5 years have we come to realize that the newer DTaP and Tdap vaccines are not working as well as we had hoped.

The two aspects to the lesser protection provided by aP vaccines are pertactin-deficient pertussis strains and quicker waning of aP vaccine–induced immunity. Antibody to pertactin appears to be important in protection against clinical pertussis. New circulating clinical strains of pertussis may not have pertactin (N. Engl. J. Med. 2013;368:583-4). The strains used in our current DTaP and Tdap were designed to protect against pertactin-containing strains and were tested for this. This means that a proportion of the antibodies induced by vaccine strains are not useful against pertactin-deficient strains. The aP vaccines still induce antibody to pertussis toxin and the other pertussis components they contain, so they will likely still reduce the severity of disease. But the vaccines are not likely to prevent infections from pertactin-deficient strains. This is similar to the partial vaccine mismatch that we are seeing with the current seasonal H3N2 influenza vaccine strain.

The second aspect is that protection appears to wane approximately 3-5 years after the last dose of aP-containing vaccine. This contrasts sharply with past expectations of 7-10 years of protection from whole-cell pertussis–containing vaccines. The less reactive aP vaccine produces fewer adverse effects because it does not produce as much inflammation as DTP. The problem is that part of the reason DTP elicits such good protective responses is the amount of inflammation it produces. So with less aP vaccine–induced inflammation come less robust antibody and T-cell responses.

Nevertheless, the current acellular pertussis vaccines remain the most effective available tools to reduce pertussis disease (Cochrane Database Syst. Rev. 2014;9:CD001478). But until we have new versions of pertussis vaccines that address these two issues, we clinicians need to remain vigilant for signs and symptoms of pertussis.

Clinical course

Remember that a whoop is rarely heard in young children and is often absent in older patients as well. The many outbreaks over the last 10 years have confirmed that paroxysmal cough, with or without apnea, in an infant or toddler should raise our index of suspicion. Likewise, older children, adolescents, and adults with persistent cough beyond 2 weeks are potential pertussis cases. Once the diagnosis is made, treatment is not expected to have a major impact on the clinical course, in part because the diagnosis is usually delayed (more than 10 days into symptoms). This delay allows more injury to the respiratory mucosa and cilia, so healing can require 6-12 weeks after bacterial replication ceases. This prolonged healing process is largely responsible for the syndrome known as the “100-day cough.” So the clinical course of pertussis has not changed in the last 10 years. However, there have been changes in the commonly used diagnostic approach.

Pertussis diagnosis and contagion

During the last 5 years, polymerase chain reaction (PCR) testing has become the preferred technology to detect pertussis because of the assay’s sensitivity and quick turnaround time. The gold standard for pertussis diagnosis remains culture, but it is expensive, cumbersome, and slow (up to a week to provide results). An ongoing debate concerns how long PCR testing remains positive after the onset of symptoms or after treatment. This was not a problem when culture was the diagnostic tool of choice. Data from the 1970s and 1980s indicated that cultures were rarely positive after the third week of symptoms, even without treatment. Furthermore, macrolides eliminated both contagion and positive culture results in infected patients after 5 days of treatment.

So now that we use PCR most often for diagnosis, what is the outer limit of positivity? A recent prospective cohort study from Salt Lake City suggests that PCR may detect pertussis DNA well beyond 3 weeks after symptom onset (J. Ped. Infect. Dis. 2014;3:347-9). Among patients hospitalized with laboratory-confirmed Bordetella pertussis infection, half had persistently positive pertussis PCR testing more than 50 days after symptom onset, despite antibiotic treatment and clinical improvement. The median (range) time from symptom onset to the last positive test was 58 days (4-172 days).

This raises the question of whether viable pertussis organisms remain in the respiratory tract beyond the traditional 3 weeks defined by culture data. It is likely that DNA persists in the thick mucus of the respiratory tract well beyond the viability of the last pertussis organisms. Put another way, PCR likely detects bacterial corpses or components well beyond the time that the patient is contagious. Unfortunately, current PCR data do not tell us how long patients remain contagious with the currently circulating strains of pertussis. Some institutions appear to be extending the isolation time for patients treated for pertussis beyond the traditional 5 days after initiation of effective treatment. Until more data are available, we are somewhat in the dark. But I would take comfort in the fact that it is unlikely the “new” data will be much different from those derived from the traditional studies that used culture to define infectivity. The American Academy of Pediatrics Committee on Infectious Diseases Red Book appears to agree.

For hospitalized pertussis patients, the AAP Committee on Infectious Diseases Red Book recommends standard and droplet precautions for 5 days after starting effective therapy, or 3 weeks after cough onset if appropriate antimicrobial therapy has not been given.

In addition, the CDC states: “PCR has optimal sensitivity during the first 3 weeks of cough when bacterial DNA is still present in the nasopharynx. After the fourth week of cough, the amount of bacterial DNA rapidly diminishes, which increases the risk of obtaining falsely negative results.” Later in the same document, the CDC says: “PCR testing following antibiotic therapy also can result in falsely negative findings. The exact duration of positivity following antibiotic use is not well understood, but PCR testing after 5 days of antibiotic use is unlikely to be of benefit and is generally not recommended.”

So what do we know? Not all PCR assays use the same primers, so some variance from the usual experience of up to 4 weeks of positive PCR results may be due to differences in the assays. But this raises concern that the PCR that you order may be positive at times when the patient is no longer contagious.

Pertussis treatment

If strains of pertussis have changed their pertactin antigen, are they also changing their antibiotic susceptibility patterns? There have been reports of macrolide resistance in a few pertussis strains, but such strains remain rare. The most recent comprehensive review of treatment efficacy was a Cochrane review performed in 2005 and published in 2007 (Cochrane Database Syst. Rev. 2007;3:CD004404). The reviewers evaluated 10 trials from 1969 to 2004 in which microbiologic eradication was defined by negative results on repeat pertussis culture. Although meta-analysis of microbiologic eradication was not possible because of differences in antibiotic use, the investigators did conclude that antibiotic treatment “is effective in eliminating B. pertussis from patients with the disease to render them noninfectious, but does not alter the subsequent clinical course of the illness.”

Further, they state that “the best regimens for microbiologic clearance, with fewer side effects,” are 3 days of azithromycin (a single 10-mg/kg dose on 3 consecutive days) or 7 days of clarithromycin (7.5-mg/kg dose twice daily).

Another effective regimen is 14 days of erythromycin ethylsuccinate (60 mg/kg per day in 3 divided doses).

CDC treatment recommendations include azithromycin or erythromycin, with trimethoprim-sulfamethoxazole as an alternative for macrolide-intolerant patients, although there are fewer data and success rates may not be as high.
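As an illustration of the weight-based arithmetic behind the regimens quoted above, here is a minimal sketch. It is illustrative only – it encodes no maximum (adult) doses, age restrictions, or other clinical adjustments, and it is not a prescribing tool.

```python
# Minimal sketch of the weight-based arithmetic for the regimens quoted above.
# Illustrative only: no maximum (adult) dose caps, age restrictions, or other
# clinical adjustments are encoded, and this is not a prescribing tool.

def pertussis_regimens(weight_kg: float) -> dict:
    return {
        # Azithromycin: single 10-mg/kg dose on 3 consecutive days
        "azithromycin x 3 days": f"{10 * weight_kg:.0f} mg once daily",
        # Clarithromycin: 7.5-mg/kg dose twice daily for 7 days
        "clarithromycin x 7 days": f"{7.5 * weight_kg:.0f} mg twice daily",
        # Erythromycin ethylsuccinate: 60 mg/kg/day in 3 divided doses for 14 days
        "erythromycin ES x 14 days": f"{60 * weight_kg / 3:.0f} mg three times daily",
    }

for regimen, dose in pertussis_regimens(weight_kg=12.0).items():  # e.g., a 12-kg toddler
    print(f"{regimen}: {dose}")
```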

Conclusion

So what do we know now about pertussis?

• Outbreaks are ongoing and likely will continue until newer, more effective vaccines are produced, including vaccines that circumvent the problem of pertactin-deficient strains.

• Pertussis likely remains contagious for up to 5 days after starting effective therapy, and for as long as 3 weeks if effective therapy has not been administered.

• PCR is a sensitive test that may remain positive for many weeks beyond contagion.

• Treatment with macrolides appears to be the most effective way to eradicate replicating pertussis pathogens.

• Treatment is not likely to have a major impact on the clinical course of disease because most of the damage to the respiratory tract is done prior to diagnosis and treatment. Treatment does reduce infectivity and subsequent cases.

• Current aP vaccines are our best preventive tools – including use in pregnant women to protect young infants.

As clinicians, our best course is to continue to immunize with the current vaccines and to remain vigilant for signs and symptoms of pertussis infection in patients so that early diagnosis and treatment can prevent further spread.

Dr. Harrison is professor of pediatrics and pediatric infectious diseases at Children’s Mercy Hospitals and Clinics, Kansas City, Mo. Children’s Mercy Hospitals receives funds from GlaxoSmithKline for Dr. Harrison being principal investigator on a multicenter research study of a hexavalent pertussis-containing infant vaccine. E-mail Dr. Harrison at [email protected].


ACOG outlines new treatment options for hypertensive emergencies in pregnancy

The American College of Obstetricians and Gynecologists has added nifedipine as a first-line treatment for acute-onset severe hypertension during pregnancy and the postpartum period in an updated opinion from its Committee on Obstetric Practice.

The update, released on Jan. 22, points to studies showing that women who received oral nifedipine had their blood pressure lowered more quickly than with either intravenous labetalol or hydralazine – the traditional first-line treatments – and had a significant increase in urine output. Concerns about neuromuscular blockade and severe hypotension with the use of nifedipine and magnesium sulfate were not borne out in a large review, the committee members wrote, but they advised careful monitoring since both drugs are calcium antagonists.

The committee opinion includes model order sets for the use of labetalol, hydralazine, and nifedipine for the initial management of acute-onset severe hypertension in women who are pregnant or postpartum with preeclampsia or eclampsia (Obstet. Gynecol. 2015;125:521-5).

While all three medications are appropriate in treating hypertensive emergencies during pregnancy, each drug has adverse effects.

For instance, parenteral hydralazine can increase the risk of maternal hypotension. Parenteral labetalol may cause neonatal bradycardia and should be avoided in women with asthma, heart disease, or heart failure. Nifedipine has been associated with increased maternal heart rate and overshoot hypotension.

“Patients may respond to one drug and not another,” the committee noted.

The ACOG committee also called for standardized clinical guidelines for the management of patients with preeclampsia and eclampsia.

“With the advent of pregnancy hypertension guidelines in the United Kingdom, care of maternity patients with preeclampsia or eclampsia improved significantly and maternal mortality rates decreased because of a reduction in cerebral and respiratory complications,” they wrote. “Individuals and institutions should have mechanisms in place to initiate the prompt administration of medication when a patient presents with a hypertensive emergency.”

The committee recommended checklists as one tool to help standardize the use of guidelines.


Interobserver Agreement Using Computed Tomography to Assess Radiographic Fusion Criteria With a Unique Titanium Interbody Device

The accuracy of using computed tomography (CT) to assess lumbar interbody fusion with titanium implants has been questioned in the past.1-4 Reports have most often focused on older technologies using paired, threaded, smooth-surface titanium devices. Some authors have reported they could not confidently assess the quality of fusions using CT because of implant artifact.1-3

When pseudarthrosis is suspected clinically and imaging results are inconclusive, surgical exploration may be performed with mechanical stressing of the segment to assess for motion.2,5-7 However, surgical exploration not only carries the morbidity of another operation but also may not be conclusive. Direct exploration of an interbody fusion is problematic: in some cases, there may be residual normal springing motion through the posterior elements, even in the presence of a solid interbody fusion, which can be confusing.5 Radiologic confirmation of fusion status is therefore preferred over surgical exploration. CT is the imaging modality used most often to assess spinal fusions.8,9

A new titanium interbody fusion implant (Endoskeleton TA; Titan Spine, Mequon, Wisconsin) preserves the endplate and has an acid-etched titanium surface for osseous integration and a wide central aperture for bone graft (Figure 1). Compared with earlier titanium implants, this design may allow for more accurate CT imaging and fusion assessment. We conducted a study to determine the interobserver reliability of using CT to evaluate bone formation and other radiographic variables with this new titanium interbody device.

Materials and Methods

After receiving institutional review board approval for this study, as well as patient consent, we obtained and analyzed CT scans of patients after they had undergone anterior lumbar interbody fusion (ALIF) at L3–S1 as part of a separate clinical outcomes study.

Each patient received an Endoskeleton TA implant. The fusion cage was packed with 2 sponges (3.0 mg per fusion level) of bone morphogenetic protein, or BMP (InFuse; Medtronic, Minneapolis, Minnesota). In addition, 1 to 3 cm³ of hydroxyapatite/β-tricalcium phosphate (MasterGraft, Medtronic) collagen sponge was used as graft extender to fill any remaining gaps within the cage. Pedicle screw fixation was used in all cases.

Patients were randomly assigned to have fine-cut CT scans with reconstructed images at 6, 9, or 12 months. The scans were reviewed by 2 independent radiologists who were blinded to each other’s interpretations and the clinical results. The radiographic fusion criteria are listed in Tables 1 to 3. Interobserver agreement (κ) was calculated separately for each radiographic criterion and could range from 0.00 (no agreement) to 1.00 (perfect agreement).10,11
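The per-criterion agreement statistic described here is Cohen’s kappa. The following is a minimal sketch of how such a statistic can be computed for two raters grading the same fusion levels; the ratings shown are made-up examples, not study data, and the study does not describe the software used for its own analysis.

```python
# Minimal sketch of Cohen's kappa for two raters grading the same fusion levels.
# The ratings below are made-up examples, not data from this study.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

a = [5, 5, 4, 5, 3, 5, 5, 2, 5, 5]  # rater 1 fusion grades (hypothetical)
b = [5, 5, 4, 4, 3, 5, 5, 2, 5, 5]  # rater 2 fusion grades (hypothetical)
print(f"kappa = {cohens_kappa(a, b):.2f}")
```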

Results

The study involved 33 patients (17 men, 16 women) with 56 lumbar spinal fusion levels. Mean age was 46 years (range, 23-66 years). Six patients (18%) were nicotine users. Seventeen patients were scanned at 6 months, 9 at 9 months, and 7 at 12 months. There were no significant differences in results between men and women, between nicotine users and nonusers, or among patients evaluated at 6, 9, or 12 months.

The radiologists agreed on 345 of the 392 data points reviewed (κ = 0.88). Interobserver agreement results for the fusion criteria are listed in Tables 1 and 3. Interobserver agreement was 0.77 for overall fusion grade, with the radiologists noting definite fusion (grade 5) in 80% and 91% of the levels (Table 1). Other radiographic criteria are listed in Tables 2 and 3. Interobserver agreement was 0.80 for degree of artifact, 0.95 for subsidence, 0.96 for both lucency and trabecular bone, 0.77 for anterior osseous bridging, and 0.95 for cystic vertebral changes.
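For interpretation, the Landis and Koch benchmarks cited in reference 10 are commonly applied to kappa values; the small helper below (an illustrative convenience, not part of the study’s analysis) maps the values reported here to those descriptive categories.

```python
# Landis & Koch (1977) descriptive benchmarks for kappa -- illustrative helper,
# not part of the study's analysis.
def landis_koch(kappa: float) -> str:
    if kappa < 0.00: return "poor"
    if kappa <= 0.20: return "slight"
    if kappa <= 0.40: return "fair"
    if kappa <= 0.60: return "moderate"
    if kappa <= 0.80: return "substantial"
    return "almost perfect"

for k in (0.77, 0.80, 0.88, 0.95, 0.96):  # values reported in this study
    print(f"kappa {k:.2f}: {landis_koch(k)} agreement")
```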

Discussion

Radiographic analysis of interbody fusions is an important clinical issue. Investigators have shown that CT is the radiographic method of choice for assessing fusion.8,9 Others have reported that assessing fusion with metallic interbody implants is more difficult compared with PEEK (polyether ether ketone) or allograft bone.3,4,5,12

Heithoff and colleagues1,2 reported on difficulties they encountered in assessing interbody fusion with titanium implants, and their research has often been cited. The authors concluded that they could not accurately assess fusion in these cases because of artifact from the small apertures in the cages and metallic scatter. Their study was very small (8 patients, 12 surgical levels) and used paired BAK (Bagby and Kuslich) cages (Zimmer, Warsaw, Indiana).

Recently, a unique surface technology, used to manufacture osseointegrative dental implants, has been adapted for use in the spine.13-15 Acid etching modifies the surface of titanium to create a nano-scale (micron-level) alteration. Compared with PEEK and smooth titanium, acid-etched titanium stimulates a better osteogenic environment.16,17 As this technology is now used clinically in spinal surgery, we thought it important to revisit the issue of CT analysis for fusion assessment with the newer titanium implants.

Artifact

The results of our study support the idea that the design of a titanium interbody fusion implant is important to radiographic analysis. The implant studied has a large open central aperture that appears to generate less artifact than historical controls (paired cylindrical cages) have.1-4 Other investigators have reported fewer problems with artifact in their studies of implants incorporating larger openings for bone graft.6,18 The radiologists in the present study found no significant problems with artifact. Less artifact is clinically important, as the remaining fusion variables can be more clearly visualized (Table 2, Figure 2).

Anterior Osseous Bridging, Subsidence, Lysis

In this study, the bony endplates were preserved. The disc and endplate cartilage was removed without reaming or drilling. Endplate reaming most likely contributes to subsidence and loss of original fixation between implant and bone interface.1,4,12 Some authors have advocated recessing the cages deeply and then packing bone anteriorly to create a “sentinel fusion sign.”1,2,6 Deeply seating interbody implants, instead of resting them more widely on the apophyseal ring of the vertebral endplate, may also lead to subsidence.4,12 The issue of identifying a sentinel fusion sign is relevant only if the surgeon tries to create one. In the present study, the implant used was an impacted cage positioned on the apophyseal perimeter of the disc space, just slightly recessed, so there was no attempt to create a sentinel fusion sign, as reflected in the relatively low scores on anterior osseous bridging (48%, 52%).

Subsidence and peri-implant lysis are pathologic variables associated with motion and bone loss. Sethi and colleagues19 noted a high percentage of endplate resorption and subsidence in cases reviewed using PEEK or allograft spacers paired with BMP-2. Although BMP-2 was used in the present study, we found very low rates of subsidence (0%, 5%) and no significant peri-implant lucencies (2%, 4%) (Figure 2). Interobserver agreement for these variables was high (0.95, 0.96). We hypothesize that the combination of endplate-sparing surgical technique and implant–bone integration contributed to these results.

Trabecular Bone and Fusion Grade

The primary radiographic criterion for solid interbody fusion is trabecular bone throughout the cage, bridging the vertebral bodies. In our study, the success rates for this variable were 96% and 100%, and there was very high interobserver agreement (0.96) (Figure 3). This very high fusion rate may preclude detecting subtle differences in interobserver agreement, but to what degree, if any, is unknown. Other investigators have effectively identified trabecular bone across the interspace and throughout the cages.6,18 The openings for bone formation were larger in the implants they used than in first-generation fusion cages but not as large as the implant openings in the present study. Larger openings may correlate with improved ability to visualize bridging bone on CT.

Radiologists and surgeons must ultimately arrive at a conclusion regarding the likelihood a fusion has occurred. Our radiologists integrated all the separate radiologic variables cited here, as well as their overall impressions of the scans, to arrive at a final grade regarding fusion quality (Figures 3, 4). Although this category provides the most interpretive latitude of all the variables examined, the results demonstrate high interobserver reliability. Agreement to exactly the same fusion grade was 0.77, and agreement to within 1 category grade was 0.95.

This study had several limitations. Surgical explorations were not clinically indicated and were not performed. There were no suspected nonunions or hardware complications, two of the most common indications for exploration. In addition, this study was conducted not to determine the specific accuracy of CT (compared with surgical exploration) for fusion assessment but to assess interobserver reliability. The clinical success rates for this population were high, and no patient required revision surgery for suspected pseudarthrosis. To assess the true accuracy of CT for fusion assessment, one would have to subject patients to follow-up exploratory surgery to test fusions mechanically.

Another limitation is the lack of a single industry-accepted radiographic fusion grading system. Fusion criteria are not standardized across all studies. Our radiologists have extensive research experience and limit their practices to neuromuscular radiology with a concentration on the spine. The radiographic criteria cited here are the same criteria they use in clinical practice, when reviewing CT scans for clinicians. Last, there was no control group for direct comparison against other cages. Historical controls were cited. This does not adversely affect the conclusions of this investigation.

Conclusion

Clinicians have been reluctant to rely on CT with titanium devices because of concerns about the accuracy of image interpretations. The interbody device used in this study demonstrated minimal artifact and minimal subsidence, and trabecular bone was easily identified throughout the implant in the majority of cases reviewed. We found high interobserver agreement scores across all fusion criteria. Although surgical exploration remains the gold standard for fusion assessment, surgeons should have confidence in using CT with this titanium implant.

References

1.    Gilbert TJ, Heithoff KB, Mullin WJ. Radiographic assessment of cage-assisted interbody fusions in the lumbar spine. Semin Spine Surg. 2001;13:311-315.

2.    Heithoff KB, Mullin WJ, Renfrew DL, Gilbert TJ. The failure of radiographic detection of pseudarthrosis in patients with titanium lumbar interbody fusion cages. In: Proceedings of the 14th Annual Meeting of the North American Spine Society; October 20-23, 1999; Chicago, IL. Abstract 14.

3.    Cizek GR, Boyd LM. Imaging pitfalls of interbody implants. Spine. 2000;25(20):2633-2636.

4.    Dorchak JD, Burkus JK, Foor BD, Sanders DL. Dual paired proximity and combined BAK/proximity interbody fusion cages: radiographic results. In: Proceedings of the 15th Annual Meeting of the North American Spine Society. New Orleans, LA: North American Spine Society; 2000:83-85.

5.    Santos ER, Goss DG, Morcom RK, Fraser RD. Radiologic assessment of interbody fusion using carbon fiber cages. Spine. 2003;28(10):997-1001.

6.    Carreon LY, Glassman SD, Schwender JD, Subach BR, Gornet MF, Ohno S. Reliability and accuracy of fine-cut computed tomography scans to determine the status of anterior interbody fusions with metallic cages. Spine J. 2008;8(6):998-1002.

7.    Fogel GR, Toohey JS, Neidre A, Brantigan JW. Fusion assessment of posterior lumbar interbody fusion using radiolucent cages: x-ray films and helical computed tomography scans compared with surgical exploration of fusion. Spine J. 2008;8(4):570-577.

8.    Selby MD, Clark SR, Hall DJ, Freeman BJ. Radiologic assessment of spinal fusion. J Am Acad Orthop Surg. 2012;20(11):694-703.

9.    Chafetz N, Cann CE, Morris JM, Steinbach LS, Goldberg HI, Ax L. Pseudarthrosis following lumbar fusion: detection by direct coronal CT scanning. Radiology. 1987;162(3):803-805.

10.  Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-174.

11.  Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-363.

12.  Burkus JK, Foley K, Haid RW, Lehuec JC. Surgical Interbody Research Group—radiographic assessment of interbody fusion devices: fusion criteria for anterior lumbar interbody surgery. Neurosurg Focus. 2001;10(4):E11.

13.  Albrektsson T, Zarb G, Worthington P, Eriksson AR. The long-term efficacy of currently used dental implants: a review and proposed criteria of success. Int J Oral Maxillofac Implants. 1986;1(1):11-25.

14.  De Leonardis D, Garg AK, Pecora GE. Osseointegration of rough acid-etched titanium implants: 5-year follow-up of 100 Minimatic implants. Int J Oral Maxillofac Implants. 1999;14(3):384-391.

15.  Schwartz Z, Raz P, Zhao G, et al. Effect of micrometer-scale roughness on the surface of Ti6Al4V pedicle screws in vitro and in vivo. J Bone Joint Surg Am. 2008;90(11):2485-2498.

16.  Olivares-Navarrete R, Gittens RA, Schneider JM, et al. Osteoblasts exhibit a more differentiated phenotype and increased bone morphogenetic protein production on titanium alloy substrates than on poly-ether-ether-ketone. Spine J. 2012;12(3):265-272.

17.  Olivares-Navarrete R, Hyzy SL, Gittens RA 1st, et al. Rough titanium alloys regulate osteoblast production of angiogenic factors. Spine J. 2013;13(11):1563-1570.

18.  Burkus JK, Dorchak JD, Sanders DL. Radiographic assessment of interbody fusion using recombinant human bone morphogenetic protein type 2. Spine. 2003;28(4):372-377.

19.    Sethi A, Craig J, Bartol S, et al. Radiographic and CT evaluation of recombinant human bone morphogenetic protein-2–assisted spinal interbody fusion. AJR Am J Roentgenol. 2011;197(1):W128-W133.

Author and Disclosure Information

Paul J. Slosar, MD, Jay Kaiser, MD, Luis Marrero, MD, and Damon Sacco, MD

Authors’ Disclosure Statement: This research was supported by a Spinal Research Foundation grant. Dr. Slosar wishes to report that he is Medical Director and a Scientific Advisory Board Member at Titan Spine, which makes the titanium interbody implant used in this study. The other authors report no actual or potential conflict of interest in relation to this article.

The accuracy of using computed tomography (CT) to assess lumbar interbody fusion with titanium implants has been questioned in the past.1-4 Reports have most often focused on older technologies using paired, threaded, smooth-surface titanium devices. Some authors have reported they could not confidently assess the quality of fusions using CT because of implant artifact.1-3

When pseudarthrosis is suspected clinically, and imaging results are inconclusive, surgical explorations may be performed with mechanical stressing of the segment to assess for motion.2,5-7 However, surgical exploration not only has the morbidity of another surgery but may not be conclusive. Direct exploration of an interbody fusion is problematic. In some cases, there may be residual normal springing motion through posterior elements, even in the presence of a solid interbody fusion, which can be confusing.5 Radiologic confirmation of fusion status is therefore preferred over surgical exploration. CT is the imaging modality used most often to assess spinal fusions.8,9

A new titanium interbody fusion implant (Endoskeleton TA; Titan Spine, Mequon, Wisconsin) preserves the endplate and has an acid-etched titanium surface for osseous integration and a wide central aperture for bone graft (Figure 1). Compared with earlier titanium implants, this design may allow for more accurate CT imaging and fusion assessment. We conducted a study to determine the interobserver reliability of using CT to evaluate bone formation and other radiographic variables with this new titanium interbody device.

Materials and Methods

After receiving institutional review board approval for this study, as well as patient consent, we obtained and analyzed CT scans of patients after they had undergone anterior lumbar interbody fusion (ALIF) at L3–S1 as part of a separate clinical outcomes study.

Each patient received an Endoskeleton TA implant. The fusion cage was packed with 2 sponges (3.0 mg per fusion level) of bone morphogenetic protein, or BMP (InFuse; Medtronic, Minneapolis, Minnesota). In addition, 1 to 3 cm3 of hydroxyapatite/β-‌tricalcium phosphate (MasterGraft, Medtronic) collagen sponge was used as graft extender to fill any remaining gaps within the cage. Pedicle screw fixation was used in all cases.

Patients were randomly assigned to have fine-cut CT scans with reconstructed images at 6, 9, or 12 months. The scans were reviewed by 2 independent radiologists who were blinded to each other’s interpretations and the clinical results. The radiographic fusion criteria are listed in Tables 1 to 3. Interobserver agreement (κ) was calculated separately for each radiographic criterion and could range from 0.00 (no agreement) to 1.00 (perfect agreement).10,11

Results

The study involved 33 patients (17 men, 16 women) with 56 lumbar spinal fusion levels. Mean age was 46 years (range, 23-66 years). Six patients (18%) were nicotine users. Seventeen patients were scanned at 6 months, 9 at 9 months, and 7 at 12 months. There were no significant differences in results between men and women, between nicotine users and nonusers, or among patients evaluated at 6, 9, or 12 months.

The radiologists agreed on 345 of the 392 data points reviewed (κ = 0.88). Interobserver agreement results for the fusion criteria are listed in Tables 1 and 3. Interobserver agreement was 0.77 for overall fusion grade, with the radiologists noting definite fusion (grade 5) in 80% and 91% of the levels (Table 1). Other radiographic criteria are listed in Tables 2 and 3. Interobserver agreement was 0.80 for degree of artifact, 0.95 for subsidence, 0.96 for both lucency and trabecular bone, 0.77 for anterior osseous bridging, and 0.95 for cystic vertebral changes.

Discussion

Radiographic analysis of interbody fusions is an important clinical issue. Investigators have shown that CT is the radiographic method of choice for assessing fusion.8,9 Others have reported that assessing fusion with metallic interbody implants is more difficult compared with PEEK (polyether ether ketone) or allograft bone.3,4,5,12

Heithoff and colleagues1,2 reported on difficulties they encountered in assessing interbody fusion with titanium implants, and their research has often been cited. The authors concluded that they could not accurately assess fusion in these cases because of artifact from the small apertures in the cages and metallic scatter. Their study was very small (8 patients, 12 surgical levels) and used paired BAK (Bagby and Kuslich) cages (Zimmer, Warsaw, Indiana).

Recently, a unique surface technology, used to manufacture osseointegrative dental implants, has been adapted for use in the spine.13-15 Acid etching modifies the surface of titanium to create a nano-scale (micron-level) alteration. Compared with PEEK and smooth titanium, acid-etched titanium stimulates a better osteogenic environment.16,17 As this technology is now used clinically in spinal surgery, we thought it important to revisit the issue of CT analysis for fusion assessment with the newer titanium implants.

 

 

Artifact

The results of our study support the idea that the design of a titanium interbody fusion implant is important to radiographic analysis. The implant studied has a large open central aperture that appears to generate less artifact than historical controls (paired cylindrical cages) have.1-4 Other investigators have reported fewer problems with artifact in their studies of implants incorporating larger openings for bone graft.6,18 The radiologists in the present study found no significant problems with artifact. Less artifact is clinically important, as the remaining fusion variables can be more clearly visualized (Table 2, Figure 2).

Anterior Osseous Bridging, Subsidence, Lysis

In this study, the bony endplates were preserved. The disc and endplate cartilage was removed without reaming or drilling. Endplate reaming most likely contributes to subsidence and loss of original fixation between implant and bone interface.1,4,12 Some authors have advocated recessing the cages deeply and then packing bone anteriorly to create a “sentinel fusion sign.”1,2,6 Deeply seating interbody implants, instead of resting them more widely on the apophyseal ring of the vertebral endplate, may also lead to subsidence.4,12 The issue of identifying a sentinel fusion sign is relevant only if the surgeon tries to create one. In the present study, the implant used was an impacted cage positioned on the apophyseal perimeter of the disc space, just slightly recessed, so there was no attempt to create a sentinel fusion sign, as reflected in the relatively low scores on anterior osseous bridging (48%, 52%).

Subsidence and peri-implant lysis are pathologic variables associated with motion and bone loss. Sethi and colleagues19 noted a high percentage of endplate resorption and subsidence in cases reviewed using PEEK or allograft spacers paired with BMP-2. Although BMP-2 was used in the present study, we found very low rates of subsidence (0%, 5%) and no significant peri-implant lucencies (2%, 4%) (Figure 2). Interobserver agreement for these variables was high (0.95, 0.96). We hypothesize that the combination of endplate-sparing surgical technique and implant–bone integration contributed to these results.

Trabecular Bone and Fusion Grade

The primary radiographic criterion for solid interbody fusion is trabecular bone throughout the cage, bridging the vertebral bodies. In our study, the success rates for this variable were 96% and 100%, and there was very high interobserver agreement (0.96) (Figure 3). This very high fusion rate may preclude detecting subtle differences in interobserver agreement, but to what degree, if any, is unknown. Other investigators have effectively identified trabecular bone across the interspace and throughout the cages.6,18 The openings for bone formation were larger in the implants they used than in first-generation fusion cages but not as large as the implant openings in the present study. Larger openings may correlate with improved ability to visualize bridging bone on CT.

Radiologists and surgeons must ultimately arrive at a conclusion regarding the likelihood a fusion has occurred. Our radiologists integrated all the separate radiologic variables cited here, as well as their overall impressions of the scans, to arrive at a final grade regarding fusion quality (Figures 3, 4). Although this category provides the most interpretive latitude of all the variables examined, the results demonstrate high interobserver reliability. Agreement to exactly the same fusion grade was 0.77, and agreement to within 1 category grade was 0.95.

This study had several limitations. Surgical explorations were not clinically indicated and were not performed. There were no suspected nonunions or hardware complications, two of the most common indications for exploration. In addition, this study was conducted not to determine specific accuracy of CT (compared with surgery exploration) for fusion assessment but to assess interobserver reliability. The clinical success rates for this population were high, and no patient required revision surgery for suspected pseudarthrosis. To assess the true accuracy of CT for fusion assessment, one would have to subject patients to follow-up exploratory surgery to test fusions mechanically.

Another limitation is the lack of a single industry-accepted radiographic fusion grading system. Fusion criteria are not standardized across all studies. Our radiologists have extensive research experience and limit their practices to neuromuscular radiology with a concentration on the spine. The radiographic criteria cited here are the same criteria they use in clinical practice, when reviewing CT scans for clinicians. Last, there was no control group for direct comparison against other cages. Historical controls were cited. This does not adversely affect the conclusions of this investigation.

Conclusion

Clinicians have been reluctant to rely on CT with titanium devices because of concerns about the accuracy of image interpretations. The interbody device used in this study demonstrated minimal artifact and minimal subsidence, and trabecular bone was easily identified throughout the implant in the majority of cases reviewed. We found high interobserver agreement scores across all fusion criteria. Although surgical exploration remains the gold standard for fusion assessment, surgeons should have confidence in using CT with this titanium implant.

The accuracy of using computed tomography (CT) to assess lumbar interbody fusion with titanium implants has been questioned in the past.1-4 Reports have most often focused on older technologies using paired, threaded, smooth-surface titanium devices. Some authors have reported they could not confidently assess the quality of fusions using CT because of implant artifact.1-3

When pseudarthrosis is suspected clinically, and imaging results are inconclusive, surgical explorations may be performed with mechanical stressing of the segment to assess for motion.2,5-7 However, surgical exploration not only has the morbidity of another surgery but may not be conclusive. Direct exploration of an interbody fusion is problematic. In some cases, there may be residual normal springing motion through posterior elements, even in the presence of a solid interbody fusion, which can be confusing.5 Radiologic confirmation of fusion status is therefore preferred over surgical exploration. CT is the imaging modality used most often to assess spinal fusions.8,9

A new titanium interbody fusion implant (Endoskeleton TA; Titan Spine, Mequon, Wisconsin) preserves the endplate and has an acid-etched titanium surface for osseous integration and a wide central aperture for bone graft (Figure 1). Compared with earlier titanium implants, this design may allow for more accurate CT imaging and fusion assessment. We conducted a study to determine the interobserver reliability of using CT to evaluate bone formation and other radiographic variables with this new titanium interbody device.

Materials and Methods

After receiving institutional review board approval for this study, as well as patient consent, we obtained and analyzed CT scans of patients after they had undergone anterior lumbar interbody fusion (ALIF) at L3–S1 as part of a separate clinical outcomes study.

Each patient received an Endoskeleton TA implant. The fusion cage was packed with 2 sponges (3.0 mg per fusion level) of bone morphogenetic protein, or BMP (InFuse; Medtronic, Minneapolis, Minnesota). In addition, 1 to 3 cm3 of hydroxyapatite/β-‌tricalcium phosphate (MasterGraft, Medtronic) collagen sponge was used as graft extender to fill any remaining gaps within the cage. Pedicle screw fixation was used in all cases.

Patients were randomly assigned to have fine-cut CT scans with reconstructed images at 6, 9, or 12 months. The scans were reviewed by 2 independent radiologists who were blinded to each other’s interpretations and the clinical results. The radiographic fusion criteria are listed in Tables 1 to 3. Interobserver agreement (κ) was calculated separately for each radiographic criterion and could range from 0.00 (no agreement) to 1.00 (perfect agreement).10,11

Results

The study involved 33 patients (17 men, 16 women) with 56 lumbar spinal fusion levels. Mean age was 46 years (range, 23-66 years). Six patients (18%) were nicotine users. Seventeen patients were scanned at 6 months, 9 at 9 months, and 7 at 12 months. There were no significant differences in results between men and women, between nicotine users and nonusers, or among patients evaluated at 6, 9, or 12 months.

The radiologists agreed on 345 of the 392 data points reviewed (κ = 0.88). Interobserver agreement results for the fusion criteria are listed in Tables 1 and 3. Interobserver agreement was 0.77 for overall fusion grade, with the radiologists noting definite fusion (grade 5) in 80% and 91% of the levels (Table 1). Other radiographic criteria are listed in Tables 2 and 3. Interobserver agreement was 0.80 for degree of artifact, 0.95 for subsidence, 0.96 for both lucency and trabecular bone, 0.77 for anterior osseous bridging, and 0.95 for cystic vertebral changes.

Discussion

Radiographic analysis of interbody fusions is an important clinical issue. Investigators have shown that CT is the radiographic method of choice for assessing fusion.8,9 Others have reported that assessing fusion with metallic interbody implants is more difficult compared with PEEK (polyether ether ketone) or allograft bone.3,4,5,12

Heithoff and colleagues1,2 reported on difficulties they encountered in assessing interbody fusion with titanium implants, and their research has often been cited. The authors concluded that they could not accurately assess fusion in these cases because of artifact from the small apertures in the cages and metallic scatter. Their study was very small (8 patients, 12 surgical levels) and used paired BAK (Bagby and Kuslich) cages (Zimmer, Warsaw, Indiana).

Recently, a unique surface technology, used to manufacture osseointegrative dental implants, has been adapted for use in the spine.13-15 Acid etching modifies the surface of titanium to create a nano-scale (micron-level) alteration. Compared with PEEK and smooth titanium, acid-etched titanium stimulates a better osteogenic environment.16,17 As this technology is now used clinically in spinal surgery, we thought it important to revisit the issue of CT analysis for fusion assessment with the newer titanium implants.

 

 

Artifact

The results of our study support the idea that the design of a titanium interbody fusion implant is important to radiographic analysis. The implant studied has a large open central aperture that appears to generate less artifact than historical controls (paired cylindrical cages) have.1-4 Other investigators have reported fewer problems with artifact in their studies of implants incorporating larger openings for bone graft.6,18 The radiologists in the present study found no significant problems with artifact. Less artifact is clinically important, as the remaining fusion variables can be more clearly visualized (Table 2, Figure 2).

Anterior Osseous Bridging, Subsidence, Lysis

In this study, the bony endplates were preserved. The disc and endplate cartilage was removed without reaming or drilling. Endplate reaming most likely contributes to subsidence and loss of original fixation between implant and bone interface.1,4,12 Some authors have advocated recessing the cages deeply and then packing bone anteriorly to create a “sentinel fusion sign.”1,2,6 Deeply seating interbody implants, instead of resting them more widely on the apophyseal ring of the vertebral endplate, may also lead to subsidence.4,12 The issue of identifying a sentinel fusion sign is relevant only if the surgeon tries to create one. In the present study, the implant used was an impacted cage positioned on the apophyseal perimeter of the disc space, just slightly recessed, so there was no attempt to create a sentinel fusion sign, as reflected in the relatively low scores on anterior osseous bridging (48%, 52%).

Subsidence and peri-implant lysis are pathologic variables associated with motion and bone loss. Sethi and colleagues19 noted a high percentage of endplate resorption and subsidence in cases reviewed using PEEK or allograft spacers paired with BMP-2. Although BMP-2 was used in the present study, we found very low rates of subsidence (0%, 5%) and no significant peri-implant lucencies (2%, 4%) (Figure 2). Interobserver agreement for these variables was high (0.95, 0.96). We hypothesize that the combination of endplate-sparing surgical technique and implant–bone integration contributed to these results.

Trabecular Bone and Fusion Grade

The primary radiographic criterion for solid interbody fusion is trabecular bone throughout the cage, bridging the vertebral bodies. In our study, the success rates for this variable were 96% and 100%, and there was very high interobserver agreement (0.96) (Figure 3). This very high fusion rate may preclude detecting subtle differences in interobserver agreement, but to what degree, if any, is unknown. Other investigators have effectively identified trabecular bone across the interspace and throughout the cages.6,18 The openings for bone formation were larger in the implants they used than in first-generation fusion cages but not as large as the implant openings in the present study. Larger openings may correlate with improved ability to visualize bridging bone on CT.

Radiologists and surgeons must ultimately arrive at a conclusion regarding the likelihood a fusion has occurred. Our radiologists integrated all the separate radiologic variables cited here, as well as their overall impressions of the scans, to arrive at a final grade regarding fusion quality (Figures 3, 4). Although this category provides the most interpretive latitude of all the variables examined, the results demonstrate high interobserver reliability. Agreement to exactly the same fusion grade was 0.77, and agreement to within 1 category grade was 0.95.

This study had several limitations. Surgical explorations were not clinically indicated and were not performed. There were no suspected nonunions or hardware complications, two of the most common indications for exploration. In addition, this study was conducted not to determine specific accuracy of CT (compared with surgery exploration) for fusion assessment but to assess interobserver reliability. The clinical success rates for this population were high, and no patient required revision surgery for suspected pseudarthrosis. To assess the true accuracy of CT for fusion assessment, one would have to subject patients to follow-up exploratory surgery to test fusions mechanically.

Another limitation is the lack of a single industry-accepted radiographic fusion grading system. Fusion criteria are not standardized across all studies. Our radiologists have extensive research experience and limit their practices to neuromuscular radiology with a concentration on the spine. The radiographic criteria cited here are the same criteria they use in clinical practice, when reviewing CT scans for clinicians. Last, there was no control group for direct comparison against other cages. Historical controls were cited. This does not adversely affect the conclusions of this investigation.

Conclusion

Clinicians have been reluctant to rely on CT with titanium devices because of concerns about the accuracy of image interpretations. The interbody device used in this study demonstrated minimal artifact and minimal subsidence, and trabecular bone was easily identified throughout the implant in the majority of cases reviewed. We found high interobserver agreement scores across all fusion criteria. Although surgical exploration remains the gold standard for fusion assessment, surgeons should have confidence in using CT with this titanium implant.

Biomechanical Comparison of Hamstring Tendon Fixation Devices for Anterior Cruciate Ligament Reconstruction: Part 2. Four Tibial Devices

Of the procedures performed by surgeons specializing in sports medicine and by general orthopedists, anterior cruciate ligament (ACL) reconstruction remains one of the most common.1 Recent years have seen a trend toward replacing the “gold standard” of bone–patellar tendon–bone autograft with autograft or allograft hamstring tendon in ACL reconstruction.2 This shift is being made to try to avoid the donor-site morbidity of patellar tendon autografts and decrease the incidence of postoperative anterior knee pain. With increased use of hamstring grafts in ACL reconstruction, it is important to determine the strength of different methods of graft fixation.

Rigid fixation of hamstring grafts is recognized as a crucial factor in the long-term success of ACL reconstruction. Grafts must withstand early rehabilitation forces as high as 500 N.2 There is therefore much concern about the strength of tibial fixation, given the lower bone density of the tibial metaphysis versus the femoral metaphysis. In addition, stability is more a concern in the tibia, as the forces are directly in line with the tibial tunnel.3,4

The challenge has been to engineer devices that provide stable, rigid graft fixation that allows expeditious tendon-to-bone healing and increased construct stiffness. Many new fixation devices are being marketed. There is much interest in determining which devices provide the greatest fixation strength,4-9 but several products have not yet been compared directly with one another.

We conducted a study to determine if tibial hamstring fixation devices used in ACL reconstruction differ in fixation strength. We hypothesized we would find no differences.

Materials and Methods

Forty porcine tibias were harvested after the animals had been euthanized for other studies at our institution. Our study was approved by the institutional animal care and use committee. Specimens were stored at –25°C and, on day of testing, thawed to room temperature. Gracilis and semitendinosus tendon grafts were donated by a tissue bank (LifeNet Health, Virginia Beach, Virginia). The grafts were stored at –25°C; on day of testing, tendons were thawed to room temperature.

We evaluated 4 different tibial fixation devices (Figure 1): Delta screw and Retroscrew (Arthrex, Naples, Florida), WasherLoc (Arthrotek, Warsaw, Indiana), and Intrafix (Depuy Mitek, Raynham, Massachusetts). For each device, 10 ACL fixation constructs were tested.

Quadrupled human semitendinosus–gracilis tendon grafts were fixed into the tibias using the 4 tibial fixation devices. All fixations were done according to manufacturer specifications. All interference screws were placed eccentrically. The testing apparatus and procedure are described in an article by Kousa and colleagues.6 The specimens were mounted on the mechanical testing apparatus by threaded bars and custom clamps to secure fixation (Figure 2). Constant tension was maintained on all 4 strands of the hamstring grafts to equalize the tendons. After the looped end of the hamstring graft was secured by clamps, 25 mm of graft was left between the clamp and the intra-articular tunnel.

In the cyclic loading test, the load was applied parallel to the long axis of the tibial tunnel. A 50-N preload was initially applied to each specimen for 10 seconds. Subsequently, 1500 loading cycles between 50 N and 200 N at a rate of 1 cycle per 120 seconds were performed. Standard force-displacement curves were then generated. Each tibial fixation device underwent 10 cyclic loading tests. Specimens surviving the cyclic loading then underwent a single-cycle load-to-failure (LTF) test in which the load was applied parallel to the long axis of the drill hole at a rate of 50 mm per minute.
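
To make the loading schedule concrete, the sketch below computes the target load at any point in the protocol (a 10-second 50-N preload followed by 1500 cycles between 50 N and 200 N at the stated cycle period). The sinusoidal waveform shape is an assumption for illustration, since the waveform is not specified in the text.

```python
# Sketch only: target-load schedule for the cyclic test (waveform shape assumed sinusoidal).
import math

PRELOAD_N, MIN_N, MAX_N = 50.0, 50.0, 200.0
PRELOAD_S, CYCLE_PERIOD_S, N_CYCLES = 10.0, 120.0, 1500

def target_load_n(t_s):
    """Commanded load (N) at elapsed time t_s within the preload-plus-cyclic protocol."""
    if t_s < PRELOAD_S:
        return PRELOAD_N
    t_cyc = t_s - PRELOAD_S
    if t_cyc > N_CYCLES * CYCLE_PERIOD_S:
        return MIN_N                      # protocol finished; hold at the lower load
    mean, amp = (MAX_N + MIN_N) / 2, (MAX_N - MIN_N) / 2
    return mean - amp * math.cos(2 * math.pi * t_cyc / CYCLE_PERIOD_S)

# Example: loads at the start, mid-point, and end of the first loading cycle.
print([round(target_load_n(t), 1) for t in (0.0, 10.0, 70.0, 130.0)])
```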

Residual displacement, stiffness, and ultimate LTF data were recorded from the force-displacement curves. Residual displacement data were generated from the cyclic loading test; residual displacement was determined by subtracting the preload displacement from the displacement at 1, 10, 50, 100, 250, 500, 1000, and 1500 cycles. Stiffness data were generated from the single-cycle LTF test; stiffness was defined as the slope of the linear region of the force-displacement curve, corresponding to the steepest straight-line tangent to the loading curve. Ultimate LTF (yield load) data were also generated from the single-cycle LTF test; ultimate LTF was defined as the load at the point where the slope of the load-displacement curve first decreases.
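
As a rough illustration of how these three measures can be extracted from recorded data, the sketch below assumes the force-displacement samples are available as NumPy arrays. The sliding-window width, the yield-detection rule, and all variable names are illustrative assumptions, not the testing software actually used in the study.

```python
# Sketch only: outcome measures from a force-displacement record (illustrative assumptions).
import numpy as np

def residual_displacement_mm(preload_disp_mm, disp_at_cycle_mm):
    """Displacement at each sampled cycle count minus the displacement under the 50-N preload."""
    return {cycle: d - preload_disp_mm for cycle, d in disp_at_cycle_mm.items()}

def stiffness_n_per_mm(force_n, disp_mm, window=20):
    """Steepest straight-line slope of the load-to-failure curve, via a sliding linear fit."""
    slopes = [np.polyfit(disp_mm[i:i + window], force_n[i:i + window], 1)[0]
              for i in range(len(force_n) - window)]
    return max(slopes)

def yield_load_n(force_n, disp_mm, drop_fraction=0.9):
    """Load at which the local slope first falls below a fraction of its peak value."""
    slope = np.gradient(force_n, disp_mm)
    peak_slope = slope.max()
    for f, s in zip(force_n, slope):
        if f > 50 and s < drop_fraction * peak_slope:   # skip the toe region below the preload
            return float(f)
    return float(force_n.max())

# Hypothetical usage with a synthetic curve that plateaus near 650 N.
disp = np.linspace(0.01, 10, 500)
force = 650 * (1 - np.exp(-disp / 3))
print(stiffness_n_per_mm(force, disp), yield_load_n(force, disp))
print(residual_displacement_mm(0.4, {1: 1.1, 500: 2.6, 1500: 3.3}))
```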

Statistical analysis generated standard descriptive statistics: means, standard deviations, and proportions. One-way analysis of variance (ANOVA) was used to determine any statistically significant differences in stiffness, yield load, and residual displacement between the different fixation devices. Differences in force (load) between the single cycle and the cyclic loading test were determined by ANOVA. P < .05 was considered statistically significant for all tests.
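
A minimal sketch of the described comparison is shown below, assuming the per-specimen ultimate loads are available as lists; the values are hypothetical and chosen only to resemble the reported group means.

```python
# Sketch only: one-way ANOVA across the four device groups (hypothetical per-specimen data).
from scipy.stats import f_oneway

ultimate_ltf_n = {
    "Intrafix":   [640, 702, 588, 655, 671, 610, 698, 645, 660, 691],
    "WasherLoc":  [615, 640, 598, 642, 625, 633, 610, 645, 628, 664],
    "Delta":      [420, 445, 410, 438, 425, 440, 415, 450, 428, 429],
    "Retroscrew": [280, 295, 270, 290, 285, 288, 275, 292, 283, 292],
}

f_stat, p_value = f_oneway(*ultimate_ltf_n.values())
print(f"F = {f_stat:.1f}, P = {p_value:.4g}")
if p_value < .05:
    print("At least one device differs in mean ultimate load to failure.")
```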

Results

The modes of failure for the devices were similar. In all 10 tests, Intrafix was pulled through the tunnel with the hamstring allografts. WasherLoc failed in each test, with the tendons eventually being pulled through the washer and thus out through the tunnel. Delta screw and Retroscrew both failed by slippage of the fixation device, with the tendons pulled out through the tunnel.

For the cyclic loading tests, 8 of the 10 Delta screws and only 2 of the 10 Retroscrews completed the 1500-cycle loading test. The 2 Delta screws that did not complete the testing failed after about 500 cycles, and the 8 Retroscrews that did not complete the testing failed after about 250 cycles. All 10 WasherLoc and Intrafix devices completed the testing.

Residual displacement data were calculated from the cyclic loading tests (Table). Mean (SD) residual displacement was lowest for Intrafix at 2.9 (1.2) mm, followed by WasherLoc at 5.6 (2.2) mm and Delta at 6.4 (3.3) mm. Retroscrew, at 25.5 (11.0) mm, had the highest residual displacement, though only 2 Retroscrews completed the cyclic tests. Intrafix, WasherLoc, and Delta were not statistically different from one another, but there was a statistically significant difference between Retroscrew and the other devices (P < .001).

Stiffness data were calculated from the LTF tests (Table). Mean (SD) stiffness was highest for Intrafix at 129 (32.7) N/mm, followed by WasherLoc at 97 (11.6) N/mm, Delta at 93 (9.5) N/mm, and Retroscrew at 80.2 (8.8) N/mm. Intrafix had statistically higher stiffness compared with WasherLoc (P < .05), Delta (P < .01), and Retroscrew (P < .05). There were no significant differences in stiffness among WasherLoc, Delta, and Retroscrew.

Mean (SD) ultimate LTF was highest for Intrafix at 656 (182.6) N, followed by WasherLoc at 630 (129.3) N, Delta at 430 (90.0) N, and Retroscrew at 285 (33.8) N (Table). Ultimate LTF was significantly higher for Intrafix than for Delta (P < .05) and Retroscrew (P < .05). WasherLoc also failed at a significantly higher load than Delta (P < .05) and Retroscrew (P < .05). There was no significant difference in mean ultimate LTF between Intrafix and WasherLoc.
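
The pairwise P values above imply a post-hoc comparison after the overall ANOVA; the article does not state which procedure was used, so the sketch below applies Tukey's honestly significant difference test (via statsmodels) as one reasonable choice, again on hypothetical data.

```python
# Sketch only: post-hoc pairwise comparison of ultimate LTF (hypothetical data;
# Tukey's HSD is an assumed choice, not necessarily the procedure used by the authors).
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = {
    "Intrafix":   [640, 702, 588, 655, 671, 610, 698, 645, 660, 691],
    "WasherLoc":  [615, 640, 598, 642, 625, 633, 610, 645, 628, 664],
    "Delta":      [420, 445, 410, 438, 425, 440, 415, 450, 428, 429],
    "Retroscrew": [280, 295, 270, 290, 285, 288, 275, 292, 283, 292],
}
loads_n, devices = [], []
for device, values in data.items():
    loads_n.extend(values)
    devices.extend([device] * len(values))

# Prints each device pair with the mean difference, adjusted P value, and reject decision.
print(pairwise_tukeyhsd(endog=loads_n, groups=devices, alpha=0.05))
```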

Discussion

In this biomechanical comparison of 4 different tibial fixation devices, Intrafix had results superior to those of the other implants: it failed at a higher ultimate load, showed less residual displacement, and had greater stiffness. WasherLoc performed well and had an ultimate LTF similar to that of Intrafix. The interference screws performed poorly with respect to LTF, residual displacement, and stiffness, and a large proportion of them failed early in cyclic loading.

Intrafix is a central fixation device that uses a 4-quadrant sleeve and a screw to establish tensioning across all 4 hamstring graft strands. The theory is that this configuration increases the contact area between graft and bone for proper integration of graft into bone. Intrafix has performed well in other biomechanical studies. Using a study design similar to ours, Kousa and colleagues7 found the performance of Intrafix to be superior to that of other devices, including interference screws and WasherLoc. Starch and colleagues10 reported that, compared with a standard interference screw, Intrafix required a significantly higher load to cause a millimeter of graft laxity; they concluded that this demonstrates superior fixation strength and reduced laxity of the graft after cyclic loading. Coleridge and Amis4 found that, compared with WasherLoc and various interference screws, Intrafix had the lowest residual displacement. However, they also found that WasherLoc had a higher ultimate tensile strength than Intrafix and the interference screws. Their findings may be difficult to compare with ours, as they tested fixation of calf extensor tendons, whereas we tested human hamstring grafts.

An important concern in the present study was the poor performance of the interference screws. Other authors have recently expressed concern about using interference screws with soft-tissue ACL grafts, based on biomechanical findings of increased slippage, bone tunnel widening, and lower strength.11 Delta screws and Retroscrews have not been specifically evaluated, and their fixation strengths have not been directly compared with those of other devices. In the present study, Delta screws and Retroscrews consistently performed the poorest with respect to ultimate LTF, residual displacement, and stiffness: 20% of the Delta screws and 80% of the Retroscrews did not complete 1500 cycles. The poor performance of the interference screws echoes the studies by Magen and colleagues12 and Kousa and colleagues,7 in which the only complete failures occurred during cyclic loading of the interference screws.

Three possible confounding factors may have affected the performance of the interference screws: bone density of porcine tibia, length of interference screw, and location of screw placement. In addition, in clinical practice these screws may be used with other modes of graft fixation. Combined fixation (interference screws, other devices) was not evaluated in this study.

Porcine models have been used in many biomechanical graft fixation studies.4,6,7,12,13 Some authors have found porcine tibia to be a poor substitute for human cadaver tibia because the volumetric density of porcine bone is higher than that of human bone.14,15 Other authors have demonstrated fairly similar bone density between human and porcine tibia.16 The concern is that interference screw fixation strength correlates with the density of the bone in which screws are fixed.17 Therefore, one limitation of our study is that we did not determine the bone density of the porcine tibias for comparison with that of young human tibias.

Another important variable that could have affected the performance of the interference screws is screw length. One study found no significant difference in screw strength between various lengths, and longer screws failed to protect against graft slippage.18 However, Selby and colleagues19 found that, compared with 28-mm screws, 35-mm bioabsorbable interference screws failed at higher LTF. This is in part why we selected 35-mm Delta screws for our study. Both 35-mm Delta screws and 20-mm Retroscrews performed poorly. However, we could not determine if the poorer performance of Retroscrews was related to their length.

We used eccentric placement for our interference screws. Although some studies have suggested that concentric placement might improve fixation strength by increasing bone–tendon contact,20 Simonian and colleagues21 found no difference in graft slippage or ultimate LTF between eccentrically and concentrically placed screws. Although they were not biomechanically tested in our study, a few grafts were fixed with concentrically placed screws, and those tendons appeared visibly more damaged than the tendons fixed with eccentrically placed screws.

Combined tibial fixation techniques may be used in clinical practice, but we did not evaluate them in our study. Yoo and colleagues9 compared interference screw, interference screw plus cortical screw and spiked washer, and cortical screw and spiked washer alone. They found that stiffness nearly doubled, residual displacement was less, and ultimate LTF was significantly higher in the group with interference screw plus cortical screw and spiked washer. In a similar study, Walsh and colleagues13 demonstrated improved stiffness and LTF in cyclic testing with the combination of retrograde interference screw and suture button over interference screw alone. Further study may include direct comparisons of additional tibial fixation techniques using more than one device. Cost analysis of use of additional fixation devices would be beneficial as well.

Study results have clearly demonstrated that tibial fixation is the weak point in ACL reconstruction3,17 and that early aggressive rehabilitation can help restore range of motion, strength, and function.22,23 Implants that can withstand early loads during rehabilitation periods are therefore of utmost importance.

Conclusion

Intrafix demonstrated superior strength in the fixation of hamstring grafts in the tibia, followed closely by WasherLoc. When used as the sole tibial fixation device, interference screws had low LTF, decreased stiffness, and high residual displacement, which may have clinical implications for early rehabilitation after ACL reconstruction.

References

1.    Garrett WE Jr, Swiontkowski MF, Weinstein JN, et al. American Board of Orthopaedic Surgery Practice of the Orthopaedic Surgeon: part-II, certification examination case mix. J Bone Joint Surg Am. 2006;88(3):660-667.

2.    West RV, Harner CD. Graft selection in anterior cruciate ligament reconstruction. J Am Acad Orthop Surg. 2005;13(3):197-207.

3.    Brand J Jr, Weiler A, Caborn DN, Brown CH Jr, Johnson DL. Graft fixation in cruciate ligament reconstruction. Am J Sports Med. 2000;28(5):761-774.

4.    Coleridge SD, Amis AA. A comparison of five tibial-fixation systems in hamstring-graft anterior cruciate ligament reconstruction. Knee Surg Sports Traumatol Arthrosc. 2004;12(5):391-397.

5.    Fabbriciani C, Mulas PD, Ziranu F, Deriu L, Zarelli D, Milano G. Mechanical analysis of fixation methods for anterior cruciate ligament reconstruction with hamstring tendon graft. An experimental study in sheep knees. Knee. 2005;12(2):135-138.

6.    Kousa P, Järvinen TL, Vihavainen M, Kannus P, Järvinen M. The fixation strength of six hamstring tendon graft fixation devices in anterior cruciate ligament reconstruction. Part I: femoral site. Am J Sports Med. 2003;31(2):174-181.

7.    Kousa P, Järvinen TL, Vihavainen M, Kannus P, Järvinen M. The fixation strength of six hamstring tendon graft fixation devices in anterior cruciate ligament reconstruction. Part II: tibial site. Am J Sports Med. 2003;31(2):182-188.

8.    Weiler A, Hoffmann RF, Stähelin AC, Bail HJ, Siepe CJ, Südkamp NP. Hamstring tendon fixation using interference screws: a biomechanical study in calf tibial bone. Arthroscopy. 1998;14(1):29-37.

9.    Yoo JC, Ahn JH, Kim JH, et al. Biomechanical testing of hybrid hamstring graft tibial fixation in anterior cruciate ligament reconstruction. Knee. 2006;13(6):455-459.

10.  Starch DW, Alexander JW, Noble PC, Reddy S, Lintner DM. Multistranded hamstring tendon graft fixation with a central four-quadrant or a standard tibial interference screw for anterior cruciate ligament reconstruction. Am J Sports Med. 2003;31(3):338-344.

11.  Prodromos CC, Fu FH, Howell SM, Johnson DH, Lawhorn K. Controversies in soft-tissue anterior cruciate ligament reconstruction: grafts, bundles, tunnels, fixation, and harvest. J Am Acad Orthop Surg. 2008;16(7):376-384.

12.  Magen HE, Howell SM, Hull ML. Structural properties of six tibial fixation methods for anterior cruciate ligament soft tissue grafts. Am J Sports Med. 1999;27(1):35-43.

13.  Walsh MP, Wijdicks CA, Parker JB, Hapa O, LaPrade RF. A comparison between a retrograde interference screw, suture button, and combined fixation on the tibial side in an all-inside anterior cruciate ligament reconstruction: a biomechanical study in a porcine model. Am J Sports Med. 2009;37(1):160-167.

14.  Nurmi JT, Järvinen TL, Kannus P, Sievänen H, Toukosalo J, Järvinen M. Compaction versus extraction drilling for fixation of the hamstring tendon graft in anterior cruciate ligament reconstruction. Am J Sports Med. 2002;30(2):167-173.

15.  Nurmi JT, Sievänen H, Kannus P, Järvinen M, Järvinen TL. Porcine tibia is a poor substitute for human cadaver tibia for evaluating interference screw fixation. Am J Sports Med. 2004;32(3):765-771.

16.  Nagarkatti DG, McKeon BP, Donahue BS, Fulkerson JP. Mechanical evaluation of a soft tissue interference screw in free tendon anterior cruciate ligament graft fixation. Am J Sports Med. 2001;29(1):67-71.

17.  Brand JC Jr, Pienkowski D, Steenlage E, Hamilton D, Johnson DL, Caborn DN. Interference screw fixation strength of a quadrupled hamstring tendon graft is directly related to bone mineral density and insertion torque. Am J Sports Med. 2000;28(5):705-710.

18.  Stadelmaier DM, Lowe WR, Ilahi OA, Noble PC, Kohl HW 3rd. Cyclic pull-out strength of hamstring tendon graft fixation with soft tissue interference screws. Influence of screw length. Am J Sports Med. 1999;27(6):778-783.

19.  Selby JB, Johnson DL, Hester P, Caborn DN. Effect of screw length on bioabsorbable interference screw fixation in a tibial bone tunnel. Am J Sports Med. 2001;29(5):614-619.

20.  Shino K, Pflaster DS. Comparison of eccentric and concentric screw placement for hamstring graft fixation in the tibial tunnel. Knee Surg Sports Traumatol Arthrosc. 2000;8(2):73-75.

21.  Simonian PT, Sussmann PS, Baldini TH, Crockett HC, Wickiewicz TL. Interference screw position and hamstring graft location for anterior cruciate ligament reconstruction. Arthroscopy. 1998;14(5):459-464.

22.  Shelbourne KD, Nitz P. Accelerated rehabilitation after anterior cruciate ligament reconstruction. Am J Sports Med. 1990;18(3):292-299.

23.   Shelbourne KD, Wilckens JH. Current concepts in anterior cruciate ligament rehabilitation. Orthop Rev. 1990;19(11):957-964.

Author and Disclosure Information

Brian P. Scannell, MD, Bryan J. Loeffler, MD, Michael Hoenig, MD, Richard D. Peindl, PhD, Donald F. D’Alessandro, MD, Patrick M. Connor, MD, and James E. Fleischli, MD

Authors’ Disclosure Statement: All implants used in this study were donated by Biomet Sports Medicine (Arthrotek), Depuy Mitek, and Arthrex. Hamstring allografts were donated by LifeNet Health. Dr. D’Alessandro wishes to report that he is a paid consultant to Biomet Sports Medicine, and Dr. Connor wishes to report that he is a paid consultant to Biomet Sports Medicine and Zimmer. The other authors report no actual or potential conflict of interest in relation to this article.

Issue
The American Journal of Orthopedics - 44(2)
Publications
Topics
Page Number
82-85
Legacy Keywords
american journal of orthopedics, AJO, original study, study, hamstring tendon fixation devices, hamstring, tendon, devices, anterior cruciate ligament, ACL, part 2, tibial devices, ACL reconstruction, tibial, tibias, fixation, scannell, loeffler, hoenig, peindl, d'alessandro, connor, fleischli
Sections
Author and Disclosure Information

Brian P. Scannell, MD, Bryan J. Loeffler, MD, Michael Hoenig, MD, Richard D. Peindl, PhD, Donald F. D’Alessandro, MD, Patrick M. Connor, MD, and James E. Fleischli, MD

Authors’ Disclosure Statement: All implants used in this study were donated by Biomet Sports Medicine (Arthrotek), Depuy Mitek, and Arthrex. Hamstring allografts were donated by LifeNet Health. Dr. D’Alessandro wishes to report that he is a paid consultant to Biomet Sports Medicine, and Dr. Connor wishes to report that he is a paid consultant to Biomet Sports Medicine and Zimmer. The other authors report no actual or potential conflict of interest in relation to this article.

Author and Disclosure Information

Brian P. Scannell, MD, Bryan J. Loeffler, MD, Michael Hoenig, MD, Richard D. Peindl, PhD, Donald F. D’Alessandro, MD, Patrick M. Connor, MD, and James E. Fleischli, MD

Authors’ Disclosure Statement: All implants used in this study were donated by Biomet Sports Medicine (Arthrotek), Depuy Mitek, and Arthrex. Hamstring allografts were donated by LifeNet Health. Dr. D’Alessandro wishes to report that he is a paid consultant to Biomet Sports Medicine, and Dr. Connor wishes to report that he is a paid consultant to Biomet Sports Medicine and Zimmer. The other authors report no actual or potential conflict of interest in relation to this article.

Article PDF
Article PDF

Of the procedures performed by surgeons specializing in sports medicine and by general orthopedists, anterior cruciate ligament (ACL) reconstruction remains one of the most common.1 Recent years have seen a trend toward replacing the “gold standard” of bone–patellar tendon–bone autograft with autograft or allograft hamstring tendon in ACL reconstruction.2 This shift is being made to try to avoid the donor-site morbidity of patellar tendon autografts and decrease the incidence of postoperative anterior knee pain. With increased use of hamstring grafts in ACL reconstruction, it is important to determine the strength of different methods of graft fixation.

Rigid fixation of hamstring grafts is recognized as a crucial factor in the long-term success of ACL reconstruction. Grafts must withstand early rehabilitation forces as high as 500 N.2 There is therefore much concern about the strength of tibial fixation, given the lower bone density of the tibial metaphysis versus the femoral metaphysis. In addition, stability is more a concern in the tibia, as the forces are directly in line with the tibial tunnel.3,4

The challenge has been to engineer devices that provide stable, rigid graft fixation that allows expeditious tendon-to-bone healing and increased construct stiffness. Many new fixation devices are being marketed. There is much interest in determining which devices have the most fixation strength,4-9 but so far several products have not been compared with one another.

We conducted a study to determine if tibial hamstring fixation devices used in ACL reconstruction differ in fixation strength. We hypothesized we would find no differences.

Materials and Methods

Forty porcine tibias were harvested after the animals had been euthanized for other studies at our institution. Our study was approved by the institutional animal care and use committee. Specimens were stored at –25°C and, on day of testing, thawed to room temperature. Gracilis and semitendinosus tendon grafts were donated by a tissue bank (LifeNet Health, Virginia Beach, Virginia). The grafts were stored at –25°C; on day of testing, tendons were thawed to room temperature.

We evaluated 4 different tibial fixation devices (Figure 1): Delta screw and Retroscrew (Arthrex, Naples, Florida), WasherLoc (Arthrotek, Warsaw, Indiana), and Intrafix (Depuy Mitek, Raynham, Massachusetts). For each device, 10 ACL fixation constructs were tested.

Quadrupled human semitendinosus–gracilis tendon grafts were fixed into the tibias using the 4 tibial fixation devices. All fixations were done according to manufacturer specifications. All interference screws were placed eccentrically. The testing apparatus and procedure are described in an article by Kousa and colleagues.6 The specimens were mounted on the mechanical testing apparatus by threaded bars and custom clamps to secure fixation (Figure 2). Constant tension was maintained on all 4 strands of the hamstring grafts to equalize the tendons. After the looped end of the hamstring graft was secured by clamps, 25 mm of graft was left between the clamp and the intra-articular tunnel.

In the cyclic loading test, the load was applied parallel to the long axis of the tibial tunnel. A 50-N preload was initially applied to each specimen for 10 seconds. Subsequently, 1500 loading cycles between 50 N and 200 N at a rate of 1 cycle per 120 seconds were performed. Standard force-displacement curves were then generated. Each tibial fixation device underwent 10 cyclic loading tests. Specimens surviving the cyclic loading then underwent a single-cycle load-to-failure (LTF) test in which the load was applied parallel to the long axis of the drill hole at a rate of 50 mm per minute.

Residual displacement, stiffness, and ultimate LTF data were recorded from the force-displacement curves. Residual displacement data were generated from the cyclic loading test; residual displacement was determined by subtracting preload displacement from displacement at 1, 10, 50, 100, 250, 500, 1000, and 1500 cycles. Stiffness data were generated from the single-cycle LTF test; stiffness was defined as the linear region slope of the force-displacement curve corresponding to the steepest straight-line tangent to the loading curve. Ultimate LTF (yield load) data were generated from the single-cycle LTF test; ultimate LTF was defined as the load at the point where the slope of the load displacement curve initially decreases.

Statistical analysis generated standard descriptive statistics: means, standard deviations, and proportions. One-way analysis of variance (ANOVA) was used to determine any statistically significant differences in stiffness, yield load, and residual displacement between the different fixation devices. Differences in force (load) between the single cycle and the cyclic loading test were determined by ANOVA. P < .05 was considered statistically significant for all tests.

Results

The modes of failure for the devices were similar. In all 10 tests, Intrafix was pulled through the tunnel with the hamstring allografts. WasherLoc failed in each test, with the tendons eventually being pulled through the washer and thus out through the tunnel. Delta screw and Retroscrew both failed with slippage of the fixation device and the tendons pulled out through the tunnel.

 

 

For the cyclic loading tests, 8 of the 10 Delta screws and only 2 of the 10 Retroscrews completed the 1500-cycle loading test before failure. The 2 Delta screws that did not complete the testing failed after about 500 cycles, and the 8 Retroscrews that did not complete the testing failed after about 250 cycles. All 10 WasherLoc and Intrafix devices completed the testing.

Residual displacement data were calculated from the cyclic loading tests (Table). Mean (SS) residual displacement was lowest for Intrafix at 2.9 (1.2) mm, followed by WasherLoc at 5.6 (2.2) mm and Delta at 6.4 (3.3) mm. Retroscrew at 25.5 (11.0) mm had the highest residual displacement, though only 2 completed the cyclic tests. Intrafix, WasherLoc, and Delta were not statistically different, but there was a statistical difference between Retroscrew and the other devices (P < .001).

Stiffness data were calculated from the LTF tests (Table). Mean (SD) stiffness was highest for Intrafix at 129 (32.7) N/mm, followed by WasherLoc at 97 (11.6) N/mm, Delta at 93 (9.5) N/mm, and Retroscrew at 80.2 (8.8) N/mm. Intrafix had statistically higher stiffness compared with WasherLoc (P < .05), Delta (P < .01), and Retroscrew (P < .05). There were no significant differences in stiffness among WasherLoc, Delta, and Retroscrew.

Mean (SD) ultimate LTF was highest for Intrafix at 656 (182.6) N, followed by WasherLoc at 630 (129.3) N, Delta at 430 (90.0) N, and Retroscrew at 285 (33.8) N (Table). There were significant differences between Intrafix and Delta (P < .05) and Retroscrew (P < .05). WasherLoc failed at a significantly higher load compared with Delta (P < .05) and Retroscrew (P < .05). There were no significant differences in mean LTF between Intrafix and WasherLoc.

Discussion

In this biomechanical comparison of 4 different tibial fixation devices, Intrafix had results superior to those of the other implants. Intrafix failed at higher LTF and lower residual displacement and had higher stiffness. WasherLoc performed well and had LTF similar to that of Intrafix. The interference screws performed poorly with respect to LTF, residual displacement, and stiffness, and a large proportion of them failed early into cyclic loading.

Intrafix is a central fixation device that uses a 4-quadrant sleeve and a screw to establish tensioning across all 4 hamstring graft strands. The theory is this configuration increases the contact area between graft and bone for proper integration of graft into bone. Intrafix has performed well in other biomechanical studies. Using a study design similar to ours, Kousa and colleagues7 found the performance of Intrafix to be superior to that of other devices, including interference screws and WasherLoc. Starch and colleagues10 reported that, compared with a standard interference screw, Intrafix required significantly higher load to cause a millimeter of graft laxity. They concluded that this demonstrates superior fixation strength and reduced laxity of the graft after cyclic loading. Coleridge and Amis4 found that, compared with WasherLoc and various interference screws, Intrafix had the lower residual displacement. However, they also found that, compared with Intrafix and interference screws, WasherLoc had the highest ultimate tensile strength. Their findings may be difficult to compare with ours, as they tested fixation of calf extensor tendons, and we tested human hamstring grafts.

An important concern in the present study was the poor performance of the interference screws. Other authors recently expressed concern with using interference screws in soft-tissue ACL grafts—based on biomechanical study results of increased slippage, bone tunnel widening, and less strength.11 Delta screws and Retroscrews have not been specifically evaluated, and their fixation strengths have not been directly compared with those of other devices. In the present study, Delta screws and Retroscrews consistently performed the poorest with respect to ultimate LTF, residual displacement, and stiffness. Twenty percent of the Delta screws and 80% of the Retroscrews did not complete 1500 cycles. The poor performance of the interference screws was echoed in studies by Magen and colleagues12 and Kousa and colleagues,7 in which the only complete failures were in the cyclic loading of the interference screws.

Three possible confounding factors may have affected the performance of the interference screws: bone density of porcine tibia, length of interference screw, and location of screw placement. In addition, in clinical practice these screws may be used with other modes of graft fixation. Combined fixation (interference screws, other devices) was not evaluated in this study.

Porcine models have been used in many biomechanical graft fixation studies.4,6,7,12,13 Some authors have found porcine tibia to be a poor substitute for human cadaver tibia because the volumetric density of porcine bone is higher than that of human bone.14,15 Other authors have demonstrated fairly similar bone density between human and porcine tibia.16 The concern is that interference screw fixation strength correlates with the density of the bone in which screws are fixed.17 Therefore, one limitation of our study is that we did not determine the bone density of the porcine tibias for comparison with that of young human tibias.

 

 

Another important variable that could have affected the performance of the interference screws is screw length. One study found no significant difference in screw strength between various lengths, and longer screws failed to protect against graft slippage.18 However, Selby and colleagues19 found that, compared with 28-mm screws, 35-mm bioabsorbable interference screws failed at higher LTF. This is in part why we selected 35-mm Delta screws for our study. Both 35-mm Delta screws and 20-mm Retroscrews performed poorly. However, we could not determine if the poorer performance of Retroscrews was related to their length.

We used an eccentric placement for our interference screws. Although some studies have suggested concentric placement might improve fixation strength by increasing bone–tendon contact,20 Simonian and colleagues21 found no difference in graft slippage or ultimate LTF between eccentrically and concentrically placed screws. Although they were not biomechanically tested in our study, a few grafts were fixed with concentrically placed screws, and these tendons appeared to be more clinically damaged than the eccentrically placed screws.

Combined tibial fixation techniques may be used in clinical practice, but we did not evaluate them in our study. Yoo and colleagues9 compared interference screw, interference screw plus cortical screw and spiked washer, and cortical screw and spiked washer alone. They found that stiffness nearly doubled, residual displacement was less, and ultimate LTF was significantly higher in the group with interference screw plus cortical screw and spiked washer. In a similar study, Walsh and colleagues13 demonstrated improved stiffness and LTF in cyclic testing with the combination of retrograde interference screw and suture button over interference screw alone. Further study may include direct comparisons of additional tibial fixation techniques using more than one device. Cost analysis of use of additional fixation devices would be beneficial as well.

Study results have clearly demonstrated that tibial fixation is the weak point in ACL reconstruction3,17 and that early aggressive rehabilitation can help restore range of motion, strength, and function.22,23 Implants that can withstand early loads during rehabilitation periods are therefore of utmost importance.

Conclusion

Intrafix demonstrated superior strength in the fixation of hamstring grafts in the tibia, followed closely by WasherLoc. When used as the sole tibial fixation device, interference screws had low LTF, decreased stiffness, and high residual displacement, which may have clinical implications for early rehabilitation after ACL reconstruction.

Of the procedures performed by surgeons specializing in sports medicine and by general orthopedists, anterior cruciate ligament (ACL) reconstruction remains one of the most common.1 Recent years have seen a trend toward replacing the “gold standard” of bone–patellar tendon–bone autograft with autograft or allograft hamstring tendon in ACL reconstruction.2 This shift is being made to try to avoid the donor-site morbidity of patellar tendon autografts and decrease the incidence of postoperative anterior knee pain. With increased use of hamstring grafts in ACL reconstruction, it is important to determine the strength of different methods of graft fixation.

Rigid fixation of hamstring grafts is recognized as a crucial factor in the long-term success of ACL reconstruction. Grafts must withstand early rehabilitation forces as high as 500 N.2 There is therefore much concern about the strength of tibial fixation, given the lower bone density of the tibial metaphysis versus the femoral metaphysis. In addition, stability is more a concern in the tibia, as the forces are directly in line with the tibial tunnel.3,4

The challenge has been to engineer devices that provide stable, rigid graft fixation that allows expeditious tendon-to-bone healing and increased construct stiffness. Many new fixation devices are being marketed. There is much interest in determining which devices have the most fixation strength,4-9 but so far several products have not been compared with one another.

We conducted a study to determine if tibial hamstring fixation devices used in ACL reconstruction differ in fixation strength. We hypothesized we would find no differences.

Materials and Methods

Forty porcine tibias were harvested after the animals had been euthanized for other studies at our institution. Our study was approved by the institutional animal care and use committee. Specimens were stored at –25°C and, on day of testing, thawed to room temperature. Gracilis and semitendinosus tendon grafts were donated by a tissue bank (LifeNet Health, Virginia Beach, Virginia). The grafts were stored at –25°C; on day of testing, tendons were thawed to room temperature.

We evaluated 4 different tibial fixation devices (Figure 1): Delta screw and Retroscrew (Arthrex, Naples, Florida), WasherLoc (Arthrotek, Warsaw, Indiana), and Intrafix (Depuy Mitek, Raynham, Massachusetts). For each device, 10 ACL fixation constructs were tested.

Quadrupled human semitendinosus–gracilis tendon grafts were fixed into the tibias using the 4 tibial fixation devices. All fixations were done according to manufacturer specifications. All interference screws were placed eccentrically. The testing apparatus and procedure are described in an article by Kousa and colleagues.6 The specimens were mounted on the mechanical testing apparatus by threaded bars and custom clamps to secure fixation (Figure 2). Constant tension was maintained on all 4 strands of the hamstring grafts to equalize the tendons. After the looped end of the hamstring graft was secured by clamps, 25 mm of graft was left between the clamp and the intra-articular tunnel.

In the cyclic loading test, the load was applied parallel to the long axis of the tibial tunnel. A 50-N preload was initially applied to each specimen for 10 seconds. Subsequently, 1500 loading cycles between 50 N and 200 N at a rate of 1 cycle per 120 seconds were performed. Standard force-displacement curves were then generated. Each tibial fixation device underwent 10 cyclic loading tests. Specimens surviving the cyclic loading then underwent a single-cycle load-to-failure (LTF) test in which the load was applied parallel to the long axis of the drill hole at a rate of 50 mm per minute.

Residual displacement, stiffness, and ultimate LTF data were recorded from the force-displacement curves. Residual displacement data were generated from the cyclic loading test; residual displacement was determined by subtracting preload displacement from displacement at 1, 10, 50, 100, 250, 500, 1000, and 1500 cycles. Stiffness data were generated from the single-cycle LTF test; stiffness was defined as the linear region slope of the force-displacement curve corresponding to the steepest straight-line tangent to the loading curve. Ultimate LTF (yield load) data were generated from the single-cycle LTF test; ultimate LTF was defined as the load at the point where the slope of the load displacement curve initially decreases.

Statistical analysis generated standard descriptive statistics: means, standard deviations, and proportions. One-way analysis of variance (ANOVA) was used to determine any statistically significant differences in stiffness, yield load, and residual displacement between the different fixation devices. Differences in force (load) between the single cycle and the cyclic loading test were determined by ANOVA. P < .05 was considered statistically significant for all tests.

Results

The modes of failure for the devices were similar. In all 10 tests, Intrafix was pulled through the tunnel with the hamstring allografts. WasherLoc failed in each test, with the tendons eventually being pulled through the washer and thus out through the tunnel. Delta screw and Retroscrew both failed with slippage of the fixation device and the tendons pulled out through the tunnel.

 

 

For the cyclic loading tests, 8 of the 10 Delta screws and only 2 of the 10 Retroscrews completed the 1500-cycle loading test before failure. The 2 Delta screws that did not complete the testing failed after about 500 cycles, and the 8 Retroscrews that did not complete the testing failed after about 250 cycles. All 10 WasherLoc and Intrafix devices completed the testing.

Residual displacement data were calculated from the cyclic loading tests (Table). Mean (SS) residual displacement was lowest for Intrafix at 2.9 (1.2) mm, followed by WasherLoc at 5.6 (2.2) mm and Delta at 6.4 (3.3) mm. Retroscrew at 25.5 (11.0) mm had the highest residual displacement, though only 2 completed the cyclic tests. Intrafix, WasherLoc, and Delta were not statistically different, but there was a statistical difference between Retroscrew and the other devices (P < .001).

Stiffness data were calculated from the LTF tests (Table). Mean (SD) stiffness was highest for Intrafix at 129 (32.7) N/mm, followed by WasherLoc at 97 (11.6) N/mm, Delta at 93 (9.5) N/mm, and Retroscrew at 80.2 (8.8) N/mm. Intrafix had statistically higher stiffness compared with WasherLoc (P < .05), Delta (P < .01), and Retroscrew (P < .05). There were no significant differences in stiffness among WasherLoc, Delta, and Retroscrew.

Mean (SD) ultimate LTF was highest for Intrafix at 656 (182.6) N, followed by WasherLoc at 630 (129.3) N, Delta at 430 (90.0) N, and Retroscrew at 285 (33.8) N (Table). There were significant differences between Intrafix and Delta (P < .05) and Retroscrew (P < .05). WasherLoc failed at a significantly higher load compared with Delta (P < .05) and Retroscrew (P < .05). There were no significant differences in mean LTF between Intrafix and WasherLoc.

Discussion

In this biomechanical comparison of 4 different tibial fixation devices, Intrafix had results superior to those of the other implants. Intrafix failed at higher LTF and lower residual displacement and had higher stiffness. WasherLoc performed well and had LTF similar to that of Intrafix. The interference screws performed poorly with respect to LTF, residual displacement, and stiffness, and a large proportion of them failed early into cyclic loading.

Intrafix is a central fixation device that uses a 4-quadrant sleeve and a screw to establish tensioning across all 4 hamstring graft strands. The theory is this configuration increases the contact area between graft and bone for proper integration of graft into bone. Intrafix has performed well in other biomechanical studies. Using a study design similar to ours, Kousa and colleagues7 found the performance of Intrafix to be superior to that of other devices, including interference screws and WasherLoc. Starch and colleagues10 reported that, compared with a standard interference screw, Intrafix required significantly higher load to cause a millimeter of graft laxity. They concluded that this demonstrates superior fixation strength and reduced laxity of the graft after cyclic loading. Coleridge and Amis4 found that, compared with WasherLoc and various interference screws, Intrafix had the lower residual displacement. However, they also found that, compared with Intrafix and interference screws, WasherLoc had the highest ultimate tensile strength. Their findings may be difficult to compare with ours, as they tested fixation of calf extensor tendons, and we tested human hamstring grafts.

An important concern in the present study was the poor performance of the interference screws. Other authors recently expressed concern with using interference screws in soft-tissue ACL grafts—based on biomechanical study results of increased slippage, bone tunnel widening, and less strength.11 Delta screws and Retroscrews have not been specifically evaluated, and their fixation strengths have not been directly compared with those of other devices. In the present study, Delta screws and Retroscrews consistently performed the poorest with respect to ultimate LTF, residual displacement, and stiffness. Twenty percent of the Delta screws and 80% of the Retroscrews did not complete 1500 cycles. The poor performance of the interference screws was echoed in studies by Magen and colleagues12 and Kousa and colleagues,7 in which the only complete failures were in the cyclic loading of the interference screws.

Three possible confounding factors may have affected the performance of the interference screws: bone density of porcine tibia, length of interference screw, and location of screw placement. In addition, in clinical practice these screws may be used with other modes of graft fixation. Combined fixation (interference screws, other devices) was not evaluated in this study.

Porcine models have been used in many biomechanical graft fixation studies.4,6,7,12,13 Some authors have found porcine tibia to be a poor substitute for human cadaver tibia because the volumetric density of porcine bone is higher than that of human bone.14,15 Other authors have demonstrated fairly similar bone density between human and porcine tibia.16 The concern is that interference screw fixation strength correlates with the density of the bone in which screws are fixed.17 Therefore, one limitation of our study is that we did not determine the bone density of the porcine tibias for comparison with that of young human tibias.

 

 

Another important variable that could have affected the performance of the interference screws is screw length. One study found no significant difference in screw strength between various lengths, and longer screws failed to protect against graft slippage.18 However, Selby and colleagues19 found that, compared with 28-mm screws, 35-mm bioabsorbable interference screws failed at higher LTF. This is in part why we selected 35-mm Delta screws for our study. Both 35-mm Delta screws and 20-mm Retroscrews performed poorly. However, we could not determine if the poorer performance of Retroscrews was related to their length.

We used an eccentric placement for our interference screws. Although some studies have suggested concentric placement might improve fixation strength by increasing bone–tendon contact,20 Simonian and colleagues21 found no difference in graft slippage or ultimate LTF between eccentrically and concentrically placed screws. Although they were not biomechanically tested in our study, a few grafts were fixed with concentrically placed screws, and these tendons appeared to be more clinically damaged than the eccentrically placed screws.

Combined tibial fixation techniques may be used in clinical practice, but we did not evaluate them in our study. Yoo and colleagues9 compared interference screw, interference screw plus cortical screw and spiked washer, and cortical screw and spiked washer alone. They found that stiffness nearly doubled, residual displacement was less, and ultimate LTF was significantly higher in the group with interference screw plus cortical screw and spiked washer. In a similar study, Walsh and colleagues13 demonstrated improved stiffness and LTF in cyclic testing with the combination of retrograde interference screw and suture button over interference screw alone. Further study may include direct comparisons of additional tibial fixation techniques using more than one device. Cost analysis of use of additional fixation devices would be beneficial as well.

Study results have clearly demonstrated that tibial fixation is the weak point in ACL reconstruction3,17 and that early aggressive rehabilitation can help restore range of motion, strength, and function.22,23 Implants that can withstand early loads during rehabilitation periods are therefore of utmost importance.

Conclusion

Intrafix demonstrated superior strength in the fixation of hamstring grafts in the tibia, followed closely by WasherLoc. When used as the sole tibial fixation device, interference screws had low LTF, decreased stiffness, and high residual displacement, which may have clinical implications for early rehabilitation after ACL reconstruction.

References

1.    Garrett WE Jr, Swiontkowski MF, Weinsten JN, et al. American Board of Orthopaedic Surgery Practice of the Orthopaedic Surgeon: part-II, certification examination case mix. J Bone Joint Surg Am. 2006;88(3):660-667.

2.    West RV, Harner CD. Graft selection in anterior cruciate ligament reconstruction. J Am Acad Orthop Surg. 2005;13(3):197-207.

3.    Brand J Jr, Weiler A, Caborn DN, Brown CH Jr, Johnson DL. Graft fixation in cruciate ligament reconstruction. Am J Sports Med. 2000;28(5):761-774.

4.    Coleridge SD, Amis AA. A comparison of five tibial-fixation systems in hamstring-graft anterior cruciate ligament reconstruction. Knee Surg Sports Traumatol Arthrosc. 2004;12(5):391-397.

5.    Fabbriciani C, Mulas PD, Ziranu F, Deriu L, Zarelli D, Milano G. Mechanical analysis of fixation methods for anterior cruciate ligament reconstruction with hamstring tendon graft. An experimental study in sheep knees. Knee. 2005;12(2):135-138.

6.    Kousa P, Järvinen TL, Vihavainen M, Kannus P, Järvinen M. The fixation strength of six hamstring tendon graft fixation devices in anterior cruciate ligament reconstruction. Part I: femoral site. Am J Sports Med. 2003;31(2):174-181.

7.    Kousa P, Järvinen TL, Vihavainen M, Kannus P, Järvinen M. The fixation strength of six hamstring tendon graft fixation devices in anterior cruciate ligament reconstruction. Part II: tibial site. Am J Sports Med. 2003;31(2):182-188.

8.    Weiler A, Hoffmann RF, Stähelin AC, Bail HJ, Siepe CJ, Südkamp NP. Hamstring tendon fixation using interference screws: a biomechanical study in calf tibial bone. Arthroscopy. 1998;14(1):29-37.

9.    Yoo JC, Ahn JH, Kim JH, et al. Biomechanical testing of hybrid hamstring graft tibial fixation in anterior cruciate ligament reconstruction. Knee. 2006;13(6):455-459.

10.  Starch DW, Alexander JW, Noble PC, Reddy S, Lintner DM. Multistranded hamstring tendon graft fixation with a central four-quadrant or a standard tibial interference screw for anterior cruciate ligament reconstruction. Am J Sports Med. 2003;31(3):338-344.

11.  Prodromos CC, Fu FH, Howell SM, Johnson DH, Lawhorn K. Controversies in soft-tissue anterior cruciate ligament reconstruction: grafts, bundles, tunnels, fixation, and harvest. J Am Acad Orthop Surg. 2008;16(7):376-384.

12.  Magen HE, Howell SM, Hull ML. Structural properties of six tibial fixation methods for anterior cruciate ligament soft tissue grafts. Am J Sports Med. 1999;27(1):35-43.

13.  Walsh MP, Wijdicks CA, Parker JB, Hapa O, LaPrade RF. A comparison between a retrograde interference screw, suture button, and combined fixation on the tibial side in an all-inside anterior cruciate ligament reconstruction: a biomechanical study in a porcine model. Am J Sports Med. 2009;37(1):160-167.

14.  Nurmi JT, Järvinen TL, Kannus P, Sievänen H, Toukosalo J, Järvinen M. Compaction versus extraction drilling for fixation of the hamstring tendon graft in anterior cruciate ligament reconstruction. Am J Sports Med. 2002;30(2):167-173.

15.  Nurmi JT, Sievänen H, Kannus P, Järvinen M, Järvinen TL. Porcine tibia is a poor substitute for human cadaver tibia for evaluating interference screw fixation. Am J Sports Med. 2004;32(3):765-771.

16.  Nagarkatti DG, McKeon BP, Donahue BS, Fulkerson JP. Mechanical evaluation of a soft tissue interference screw in free tendon anterior cruciate ligament graft fixation. Am J Sports Med. 2001;29(1):67-71.

17.  Brand JC Jr, Pienkowski D, Steenlage E, Hamilton D, Johnson DL, Caborn DN. Interference screw fixation strength of a quadrupled hamstring tendon graft is directly related to bone mineral density and insertion torque. Am J Sports Med. 2000;28(5):705-710.

18.  Stadelmaier DM, Lowe WR, Ilahi OA, Noble PC, Kohl HW 3rd. Cyclic pull-out strength of hamstring tendon graft fixation with soft tissue interference screws. Influence of screw length. Am J Sports Med. 1999;27(6):778-783.

19.  Selby JB, Johnson DL, Hester P, Caborn DN. Effect of screw length on bioabsorbable interference screw fixation in a tibial bone tunnel. Am J Sports Med. 2001;29(5):614-619.

20.  Shino K, Pflaster DS. Comparison of eccentric and concentric screw placement for hamstring graft fixation in the tibial tunnel. Knee Surg Sports Traumatol Arthrosc. 2000;8(2):73-75.

21.  Simonian PT, Sussmann PS, Baldini TH, Crockett HC, Wickiewicz TL. Interference screw position and hamstring graft location for anterior cruciate ligament reconstruction. Arthroscopy. 1998;14(5):459-464.

22.  Shelbourne KD, Nitz P. Accelerated rehabilitation after anterior cruciate ligament reconstruction. Am J Sports Med. 1990;18(3):292-299.

23.   Shelbourne KD, Wilckens JH. Current concepts in anterior cruciate ligament rehabilitation. Orthop Rev. 1990;19(11):957-964.

When is it bipolar disorder and when is it DMDD?

Article Type
Changed
Fri, 01/18/2019 - 14:23
Display Headline
When is it bipolar disorder and when is it DMDD?

Introduction

In the last 20 years there has been a marked rise in the number of children and adolescents receiving the diagnosis of bipolar disorder (BD) – a mood disorder that, classically, involves cycling between episodes of elevated mood and episodes of low mood (Arch. Gen. Psychiatry 2007;64:1032-9). The increase in diagnosis is partly explained by children with chronic irritability being given a BD diagnosis. This has led to concern about the subsequent use, in chronically irritable children, of second-generation antipsychotics approved for BD, with the resultant side effects.

 

Dr. Robert R. Althoff

A new diagnosis called disruptive mood dysregulation disorder (DMDD) was introduced into the DSM-5 to describe these chronically irritable children and, in part, to reduce the number of children receiving a bipolar diagnosis. So, how does one know whether a child has BD, DMDD, or something else? The two brief cases that follow illustrate the difference between BD and DMDD.

Case 1 summary

Joseph is a 15-year-old boy with a history of childhood depression. About 1 year ago, he began to appear more irritable and anxious. Despite his parents’ prohibition, he was going out at night and was intoxicated on several occasions when he came home – something he had never done before. After about 2 weeks of this, he began going to bed at midnight but would be up again by 4 a.m., talking to himself, playing music, or exercising. He was hanging out with a different crowd. He began to talk about the possibility of becoming part of a motorcycle gang – at some point perhaps becoming the leader of Hells Angels. Slowly, this resolved. However, these symptoms recurred about 1 month ago with progressive worsening, and 2 days ago he stopped sleeping at all. He has been locking himself in his room, talking rapidly and excessively about motorcycles, complaining that he “just needed to get his thoughts together.” He was very distractible and was not eating. His mother called his primary care clinician, who advised her to bring him to the ED, which she could do only with police assistance because he refused to leave the home, complaining of the “noises” outside.

 

 

Case 1 discussion

Joseph most likely has bipolar I disorder, although a substance-induced mania will have to be ruled out. His symptoms are classic for what we think of as “narrow phenotypic” mania – elated and irritable mood, grandiosity, flight of ideas, decreased need for sleep, hypertalkativeness, increase in goal-directed activity, severe distractibility, and excessive involvement in activities that are likely to have painful consequences. These episodes are a clear change from baseline. Joseph had previously been depressed but had never had symptoms like this that came, went, and then returned. If these manic symptoms continue for 1 week or longer, or are severe enough to require hospitalization, they constitute a manic episode, which essentially makes the diagnosis of bipolar I disorder. Most clinicians have seen mania in late adolescence and early adulthood and can recognize these episodes when they occur in childhood. There is less ambiguity about this diagnosis when it presents with frank mania.

Case 2 summary

Henry is a 12-year-old boy. His parents say that he’s been difficult since he was “in the womb.” Beginning at about age 4 years, they noticed that he would frequently become moody, with the mood lasting almost all day and obvious to everyone around him. He remains almost constantly irritable. He responds intensely to negative emotional stimuli – for example, he got so upset about striking out at a Little League game last year that he had a 15-minute temper outburst that couldn’t be stopped. When his father removed him from the field to the car, he kicked out a window. These types of events are not uncommon, occurring four to five times per week, and are associated with verbal and physical aggression. There have been no symptom-free periods since age 4 years. There have been no clear episodes, and nothing that could be described as elation.

Case 2 discussion

Henry would very likely meet the criteria for the DSM-5 diagnosis of disruptive mood dysregulation disorder. DMDD requires that there be severe and recurrent temper outbursts that can be verbal or physical and are grossly out of proportion to the situation, happening at least three times a week for the past year. In between these outbursts, the child’s mood is angry or irritable, most of the day, nearly every day with no time longer than 3 months in the last year without symptoms. There cannot be symptoms of mania or hypomania. DMDD should be distinguished from oppositional-defiant disorder (ODD), which cannot be diagnosed concurrently. ODD has similar characteristics, but the temper outbursts are not as severe, frequent, or chronic. The mood symptoms in DMDD predominate, while oppositionality predominates in ODD. Note the chronicity of irritable mood in DMDD. This is the distinguishing characteristic of the disorder – chronic, nonepisodic irritability.
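As a schematic illustration of the frequency and chronicity thresholds just described, the sketch below encodes them as a simple screening check. This is not a diagnostic instrument: it omits several DSM-5 requirements (for example, age of onset and presence of symptoms in multiple settings), and the field names and example values are hypothetical.

from dataclasses import dataclass

@dataclass
class IrritabilityHistory:
    """Hypothetical summary of a child's mood history (illustrative only)."""
    outbursts_per_week: float          # average severe temper outbursts per week
    months_observed: int               # duration of the symptom pattern
    longest_symptom_free_months: int   # longest symptom-free stretch in the past year
    irritable_between_outbursts: bool  # angry/irritable mood most of the day, nearly every day
    manic_or_hypomanic_symptoms: bool  # any distinct manic or hypomanic symptoms

def meets_dmdd_screen(h: IrritabilityHistory) -> bool:
    # Schematic restatement of the thresholds discussed in the text: outbursts at least
    # 3 times per week for at least 12 months, persistent irritability between outbursts,
    # no symptom-free period longer than 3 months, and no mania or hypomania.
    return (
        h.outbursts_per_week >= 3
        and h.months_observed >= 12
        and h.longest_symptom_free_months <= 3
        and h.irritable_between_outbursts
        and not h.manic_or_hypomanic_symptoms
    )

# Example loosely based on "Henry" in Case 2 (values are assumed for illustration).
henry = IrritabilityHistory(4.5, 96, 0, True, False)
print(meets_dmdd_screen(henry))  # True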

 

 

General discussion

The distinction between BD and DMDD does matter, but it is sometimes quite hard to draw a clear line – even for the experts. It can be easy to be frustrated with yourself as a clinician when you’re unable to come to a clear decision about the diagnosis. With mood disorders in children, however, it’s important not to attribute the field’s lack of clarity to your own lack of knowledge. In these difficult cases, it’s highly likely that even the experts would disagree. Making the distinction between bipolar disorder and DMDD becomes even more complex in the situation of “other specified bipolar and related disorders,” which allows for short or subsyndromal hypomanic episodes with major depression, hypomania without depression, or short-duration cyclothymia. These cases, formerly called “bipolar, not otherwise specified,” are more likely to progress to adult bipolar disorder I or II. DMDD, on the other hand, is more likely to progress to adult depression (Biol. Psychiatry 2006;60:991-7).

Why does the distinction matter? Because the treatment for bipolar disorder is likely to involve one of the traditional mood stabilizers or one of the second-generation antipsychotics that are Food and Drug Administration–approved for bipolar disorder, along with family education and cognitive-behavioral therapy. However, there is no evidence at this time that the management of DMDD should consist of these same treatments. In fact, a trial of lithium for DMDD (actually, for its research predecessor, severe mood dysregulation) was negative (J. Child. Adolesc. Psychopharmacol. 2009;19:61-73). While we are still working out how to help children with DMDD, current trials are examining the use of antidepressants and psychostimulants (either serially or in combination) along with family-based interventions similar to those used for ODD. These are tough cases, and a consult with a child psychiatrist or psychologist will frequently be helpful.

Dr. Althoff is an associate professor of psychiatry, psychology, and pediatrics at the University of Vermont, Burlington. He is director of the division of behavioral genetics and conducts research on the development of self-regulation in children. Dr. Althoff has received grants/research support from the National Institute of Mental Health, the National Institute of General Medical Sciences, the Research Center for Children, Youth, and Families, and the Klingenstein Third Generation Foundation, and honoraria from the Oakstone General Publishing for CME presentations. E-mail him at [email protected].


Falling back to sleep on call

Article Type
Changed
Mon, 05/06/2019 - 12:13
Display Headline
Falling back to sleep on call

Like many groups, our practice shares backup call on a rotational basis. This week-long pleasure cruise is characterized by phone calls throughout the night (“Why are we checking a temperature on a comfortably sleeping 85-year-old at 2 a.m. again?”), dubious requests (“I am still unclear why you were cleaning out your medicine cabinet at 4 a.m. Even so, I cannot refill the oxycodone you just flushed down the toilet.”), and fragmented sleep associated with clinically significant carbohydrate cravings.

In the old days, this indispensable community service could be handled without the need for remoting into the practice. But most calls these days require that our computers be close at hand. As such, we find ourselves in the wee hours of morning staring at computer screens that, we are increasingly aware, emit melatonin-killing blue wavelengths of light. This makes it that much harder to go back to sleep after triaging colonoscopy-preps-gone-wrong calls.

Dr. Jon O. Ebbert

Several months ago, one of my patients gave me orange-tinted, blue light–blocking (BB) glasses as a gift. These glasses are designed to filter out the blue wavelength (480 nm), which most strongly impacts alertness, cognitive performance, and circadian physiology.

They had collected dust on my desk … until last week, while I was on call.

In a recently published study, Stéphanie van der Lely of the University of Basel, Switzerland, and colleagues evaluated the impact of blue-blocker glasses as a countermeasure to evening computer screen time among adolescents (J. Adolesc. Health 2015;56:113-9). Thirteen adolescents with a mean age of 16 years participated in this crossover study over 16 days. Blue blockers were provided from 6 p.m. to sleep onset. Glasses reduced the blue light transmission to 30%.

Compared with clear lenses, the BB glasses significantly attenuated LED-induced melatonin suppression in the evening. They also decreased vigilant attention and subjective alertness before bedtime.

This article would suggest that my melatonin is not being suppressed while I wear the glasses as I do my evening article writing and answer phone calls. The color shifts take some getting used to, but the glasses are comfortable. In addition to sleeping in the attic, my backup call routine will include these glasses.

Now, if we can just find something to filter out midnight acetaminophen requests. At least I’ll fall back asleep quickly after telling them to take two and call me in the morning.

Dr. Ebbert is professor of medicine, a general internist at the Mayo Clinic in Rochester, Minn., and a diplomate of the American Board of Addiction Medicine. The opinions expressed are those of the author. The opinions expressed in this article should not be used to diagnose or treat any medical condition nor should they be used as a substitute for medical advice from a qualified, board-certified practicing clinician.


FEVAR radiation injury reexamined

Do not become complacent
Article Type
Changed
Tue, 12/13/2016 - 12:08
Display Headline
FEVAR radiation injury reexamined

CORONADO, CALIF. – Skin injury following fenestrated endovascular aortic stent grafting is less prevalent than expected, results from a single-center retrospective study showed.

“Radiation-induced skin injury is a serious potential complication of fluoroscopically guided interventions,” Dr. Melissa L. Kirkwood said at the annual meeting of the Western Vascular Society. “These injuries are associated with a threshold radiation dose, above which the severity of injury increases with increasing dose. Instances of these injuries are mostly limited to case reports of coronary interventions, TIPS procedures, and neuroembolizations.”

These radiation-induced skin lesions can be classified as prompt, early, mid-term, or late depending on when they present following the fluoroscopically guided intervention. “The National Cancer Institute has defined four grades of skin injury, with the most frequent being transient erythema, a prompt reaction within the first 24 hours occurring at skin doses as low as 2 Gy,” said Dr. Kirkwood of the division of vascular and endovascular surgery at the University of Texas Southwestern Medical Center, Dallas. “With increasing skin doses, more severe effects present themselves. Atrophy, ulceration, and necrosis are possibilities.”

She went on to note that fenestrated endovascular aneurysm repair (FEVAR) often requires high doses of radiation, yet the prevalence of deterministic skin injury following these cases is unknown. In a recent study, Dr. Kirkwood and her associates retrospectively reviewed 61 complex fluoroscopically guided interventions that met the substantial radiation dose level (SRDL) criterion, defined by the National Council on Radiation Protection and Measurements as a reference air kerma (RAK) of 5 Gy or greater (J. Vasc. Surg. 2014;60:742-8).

“Despite mean peak skin doses as high as 6.5 Gy, ranging up to 18.5 Gy, we did not detect any skin injuries in this cohort,” Dr. Kirkwood said. “That study, however, was limited by its retrospective design. There was no postoperative protocol in place to ensure that a thorough skin exam was performed on each patient at every follow-up visit. Therefore, we hypothesized that a more thorough postoperative follow-up of patients would detect some skin injury following these cases.” For the current study, she and her associates sought to examine the prevalence of deterministic effects after FEVAR as well as any patient characteristics that may predispose patients to skin injury.

 

 

In June 2013, the researchers implemented a new policy for the follow-up of FEVAR patients, which involved a full skin exam at postoperative weeks 2 and 4 and at 3 and 6 months, as well as questioning patients about any skin-related complaints. For the current study, they retrospectively reviewed all FEVARs performed over a 7-month period after the change in policy and identified all cases that reached a RAK of 5 Gy or greater.

Peak skin dose, a dose index, and simulated skin dose maps were calculated using customized software employing input data from fluoroscopic machine logs. Of 317 cases performed, 22 met or exceeded a RAK of 5 Gy. Of these, 21 were FEVARs and one was an embolization. Dr. Kirkwood reported that the average RAK for all FEVARs was 8 Gy, with a range of 5-11 Gy.

Slightly more than half of the patients (52%) had multiple fluoroscopically guided interventions within 6 months of their SRDL event. The average RAK for these patients was 10 Gy (range, 5-15 Gy). The mean peak skin dose for all FEVARs was 5 Gy (range, 2-10 Gy), and the dose index – the ratio of peak skin dose to RAK – was 0.69. The average peak skin dose for the subset of patients with multiple procedures was 7 Gy (range, 3-9 Gy).

In terms of the follow-up, all 21 FEVAR patients were examined at the 1- or 2-week mark, 81% were examined at 1 month, 52% were examined at 3 months, and 62% were examined at 6 months. No radiation skin injuries were reported. “Based on the published data, we would expect to see all grades of skin injury, especially in the cohort of the 5-10 Gy,” Dr. Kirkwood said.

In the previous study, conducted prior to the new follow-up policy, the dose index for FEVARs was 0.78, “meaning that the peak skin dose that the patient received could be roughly estimated as 78% of the RAK dose displayed on the monitor,” Dr. Kirkwood explained.

“In the current work, the dose index decreased to 60%. This suggests that surgeons in our group have now more appropriately and effectively employed strategies to decrease radiation dose to the patient. However, even when the best operating practice is employed, FEVARs still continue to require high radiation doses in order to complete.”
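To make the relation among these dose quantities concrete, here is a minimal numeric sketch. The dose-index values and the 5-Gy SRDL threshold are taken from the figures quoted above; the case data, function name, and code itself are hypothetical and are not the customized dosimetry software the investigators used.

# Illustrative sketch: estimating peak skin dose (PSD) from reference air kerma (RAK)
# using a dose index (PSD/RAK), and flagging cases that meet the 5-Gy SRDL threshold.
SRDL_RAK_GY = 5.0  # substantial radiation dose level: RAK >= 5 Gy

def estimated_peak_skin_dose(rak_gy: float, dose_index: float) -> float:
    """Rough PSD estimate: PSD ~= dose_index * RAK (e.g., 0.69 * 8 Gy ~= 5.5 Gy)."""
    return dose_index * rak_gy

cases = [  # fabricated examples for illustration
    {"case_id": "A", "rak_gy": 8.0, "dose_index": 0.69},
    {"case_id": "B", "rak_gy": 4.0, "dose_index": 0.60},
    {"case_id": "C", "rak_gy": 11.0, "dose_index": 0.60},
]

for case in cases:
    psd = estimated_peak_skin_dose(case["rak_gy"], case["dose_index"])
    meets_srdl = case["rak_gy"] >= SRDL_RAK_GY
    print(f"Case {case['case_id']}: RAK {case['rak_gy']:.1f} Gy, "
          f"estimated PSD {psd:.1f} Gy, SRDL {'yes' if meets_srdl else 'no'}")

As the accompanying commentary notes, such estimates can overstate the dose delivered to any single patch of skin when gantry angle and table position change frequently during the case.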

The present study demonstrated that deterministic skin injuries “are uncommon after FEVAR, even at high RAK levels and regardless of cumulative dose,” she concluded. “Even with more comprehensive patient follow-up, the fact that no skin injuries were reported suggests that skin injuries in this patient cohort are less prevalent than the published guidelines would predict.”

Dr. Kirkwood reported no financial disclosures.

[email protected]

Commentary

This report is a follow-up of a study by the same group, published in the Journal of Vascular Surgery in 2013 (58:715-21), in which they demonstrated that a variety of radiation safety measures – increasing table height, using collimation and angulation, decreasing magnification modes, and maintaining minimal patient-to-detector distance – reduced the skin dose to their patients by 60% when measured as an index of peak skin dose to reference air kerma (PSD/RAK). Unfortunately, skin exposure remained high for FEVAR despite these measures, underscoring the fact that for very complex interventions, even with excellent radiation safety practices, the risk of skin injury remains.

The fact that skin doses as high as 11 Gy did not result in any deterministic injuries is both reassuring and a little surprising. According to the Centers for Disease Control and Prevention, radiation doses of greater than 2 Gy but less than 15 Gy will usually result in erythema within 1-2 days, with a second period of erythema and edema at 2-5 weeks, occasionally resulting in desquamation at 6-7 weeks. Late changes can include mild skin atrophy and some hyperpigmentation. Although complete healing can usually be expected at these doses, squamous skin cancer can still occur, often more than a decade after exposure.
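For orientation only, the sketch below restates the dose ranges quoted in the preceding paragraph as a simple lookup. The 2-Gy and 15-Gy cut points and the described effects come from the text above; everything else is an assumption, and this is not clinical guidance.

def expected_skin_response(skin_dose_gy: float) -> str:
    """Rough restatement of the dose ranges described above (illustrative only)."""
    if skin_dose_gy < 2.0:
        return "below the ~2 Gy level cited for transient erythema"
    if skin_dose_gy < 15.0:
        return ("erythema within 1-2 days; possible second period of erythema and edema at "
                "2-5 weeks; occasional desquamation at 6-7 weeks; possible late mild atrophy "
                "and hyperpigmentation")
    return "15 Gy or more: more severe injury expected (beyond the ranges described above)"

# Example using the 11-Gy figure mentioned above.
print(expected_skin_response(11.0))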

Dr. Frank Pomposelli

So why were no injuries seen? It may be that some were missed, since follow-up examinations were not performed in 100% of the patients at any time interval, and it is not stated whether exams were routinely performed in the first 1-2 days, when most patients were presumably still hospitalized and the first stage of skin erythema is usually seen. Alternatively, the surrogate measures – RAK and the PSD/RAK index – may have overestimated the true radiation skin dose, which seems highly likely, because the frequent changes in gantry angle and table position so commonly used in these procedures reduce the exposure time at any one skin location.

In our hospital, Massachusetts Department of Public Health regulations require that the patient and their physician be notified by letter when the estimated total absorbed radiation dose equals or exceeds 2 Gy. This is based on calculations by our physicist, who reviews the details of any case in which the measured RAK equals or exceeds 2 Gy. As in the authors’ experience, this most commonly occurs with lengthy and complex interventions. In our experience, we have never observed a significant skin injury, presumably for the same reason – the exposure in any one location tends to be far less than the total calculated skin dose. Nevertheless, this study should not lull surgeons into a sense of complacency regarding the risk to the patient (and to themselves and their staff). As our comfort and expertise with complex interventions increase, it is likely that radiation exposure will continue to increase, placing our patients at greater risk. Understanding the risk of radiation skin injury and how to minimize it is critical for any surgeon performing FEVAR or any other complex intervention utilizing fluoroscopic imaging.

Dr. Frank Pomposelli is an associate professor of surgery at Harvard Medical School. He is also an associate medical editor for Vascular Specialist.

Author and Disclosure Information

Publications
Topics
Legacy Keywords
FEVAR, endovascular aortic stent, skin lesions
Author and Disclosure Information

Author and Disclosure Information

Body

This report is a follow-up of a study by the same group published in the Journal of Vascular Surgery in 2013 (58:715-21) in which they demonstrated that the use of a variety of radiation safety measures including increasing table height, utilizing collimation and angulation, decreasing magnification modes, and maintaining minimal patient-to-detector distance resulted in a 60% reduction in skin dose to their patients when measured as an index of peak skin dose to reference air kerma (PSD/RAK). Unfortunately, skin exposure remained high for FEVAR despite these measures, underscoring the fact that for very complex interventions, even with excellent radiation safety practices, the risk of skin injury remains.

The fact that skin doses as high as 11 Gy did not result in any deterministic injuries is both reassuring and a little surprising. According to the Centers for Disease Control and Prevention, radiation doses of greater than 2 Gy but less than 15 Gy will usually result in erythema within 1-2 days, with a second period of erythema and edema at 2-5 weeks, occasionally resulting in desquamation at 6-7 weeks. Late changes can include mild skin atrophy and some hyperpigmentation. Although complete healing can usually be expected at these doses, squamous skin cancer can still occur, often more than a decade after exposure.

Dr. Frank Pomposelli

So why were no injuries seen? It may be that some were missed since follow-up examinations were not performed in 100% of their patients at any time interval, and it’s not stated whether exams were routinely performed in the first 1-2 days, when I would presume most patients were still hospitalized and the first stage of skin erythema is usually seen. Alternatively, it may be that the surrogate measure of either RAK or the index of PSD/RAK overestimated the true radiation skin dose, which seems highly likely, especially if the time of exposure in any one location was based less on the frequent changes in gantry angle and table position so commonly used in these procedures.

In our hospital, the Massachusetts Department of Public Health regulations require the patient and their physician be notified by letter when the estimated total absorbed radiation dose equals or exceeds 2 Gy. This is based on calculations by our physicist who reviews the details of any case in which the RAK measured equals or exceeds 2 Gy. Like the experiences of the authors, this most commonly occurs with lengthy and complex interventions. In our experience, we have never observed a significant skin injury presumably for the same reason – the exposure in any one location tends to be far less than the total calculated skin dose. Nevertheless, this study should not lull surgeons into a sense of complacency regarding the risk to the patient (and themselves and their staff). As our comfort and expertise with complex interventions increase, it is likely that radiation exposure will continue to increase, placing our patients at increased risk. Understanding the risk of radiation skin injury and how to minimize it is critical for any surgeon performing FEVAR and any other complex intervention utilizing fluoroscopic imaging.

Dr. Frank Pomposelli is an associate professor of surgery at Harvard Medical School. He is also an associate medical editor for Vascular Specialist.

Body

This report is a follow-up of a study by the same group published in the Journal of Vascular Surgery in 2013 (58:715-21) in which they demonstrated that the use of a variety of radiation safety measures including increasing table height, utilizing collimation and angulation, decreasing magnification modes, and maintaining minimal patient-to-detector distance resulted in a 60% reduction in skin dose to their patients when measured as an index of peak skin dose to reference air kerma (PSD/RAK). Unfortunately, skin exposure remained high for FEVAR despite these measures, underscoring the fact that for very complex interventions, even with excellent radiation safety practices, the risk of skin injury remains.

The fact that skin doses as high as 11 Gy did not result in any deterministic injuries is both reassuring and a little surprising. According to the Centers for Disease Control and Prevention, radiation doses of greater than 2 Gy but less than 15 Gy will usually result in erythema within 1-2 days, with a second period of erythema and edema at 2-5 weeks, occasionally resulting in desquamation at 6-7 weeks. Late changes can include mild skin atrophy and some hyperpigmentation. Although complete healing can usually be expected at these doses, squamous skin cancer can still occur, often more than a decade after exposure.

Dr. Frank Pomposelli

So why were no injuries seen? It may be that some were missed since follow-up examinations were not performed in 100% of their patients at any time interval, and it’s not stated whether exams were routinely performed in the first 1-2 days, when I would presume most patients were still hospitalized and the first stage of skin erythema is usually seen. Alternatively, it may be that the surrogate measure of either RAK or the index of PSD/RAK overestimated the true radiation skin dose, which seems highly likely, especially if the time of exposure in any one location was based less on the frequent changes in gantry angle and table position so commonly used in these procedures.

In our hospital, the Massachusetts Department of Public Health regulations require the patient and their physician be notified by letter when the estimated total absorbed radiation dose equals or exceeds 2 Gy. This is based on calculations by our physicist who reviews the details of any case in which the RAK measured equals or exceeds 2 Gy. Like the experiences of the authors, this most commonly occurs with lengthy and complex interventions. In our experience, we have never observed a significant skin injury presumably for the same reason – the exposure in any one location tends to be far less than the total calculated skin dose. Nevertheless, this study should not lull surgeons into a sense of complacency regarding the risk to the patient (and themselves and their staff). As our comfort and expertise with complex interventions increase, it is likely that radiation exposure will continue to increase, placing our patients at increased risk. Understanding the risk of radiation skin injury and how to minimize it is critical for any surgeon performing FEVAR and any other complex intervention utilizing fluoroscopic imaging.

Dr. Frank Pomposelli is an associate professor of surgery at Harvard Medical School. He is also an associate medical editor for Vascular Specialist.

FEVAR radiation injury reexamined

CORONADO, CALIF. – Skin injury following fenestrated endovascular aortic stent grafting is less prevalent than expected, results from a single-center retrospective study showed.

“Radiation-induced skin injury is a serious potential complication of fluoroscopically guided interventions,” Dr. Melissa L. Kirkwood said at the annual meeting of the Western Vascular Society. “These injuries are associated with a threshold radiation dose, above which the severity of injury increases with increasing dose. Instances of these injuries are mostly limited to case reports of coronary interventions, TIPS procedures, and neuroembolizations.”

These radiation-induced skin lesions can be classified as prompt, early, mid-term, or late depending on when they present following the fluoroscopically guided intervention. “The National Cancer Institute has defined four grades of skin injury, with the most frequent being transient erythema, a prompt reaction within the first 24 hours occurring at skin doses as low as 2 Gy,” said Dr. Kirkwood of the division of vascular and endovascular surgery at the University of Texas Southwestern Medical Center, Dallas. “With increasing skin doses, more severe effects present themselves. Atrophy, ulceration, and necrosis are possibilities.”

She went on to note that fenestrated endovascular aneurysm repair often requires high doses of radiation, yet the prevalence of deterministic skin injury following these cases is unknown. In a recent study, Dr. Kirkwood and her associates retrospectively reviewed 61 complex fluoroscopically guided interventions that met substantial radiation dose level (SRDL) criteria, which is defined by the National Council on Radiation and Protection Measurements as a reference air kerma (RAK) greater than or equal to 5 Gy (J. Vasc. Surg. 2014; 60:742-8).

“Despite mean peak skin doses as high as 6.5 Gy, ranging up to 18.5 Gy, we did not detect any skin injuries in this cohort,” Dr. Kirkwood said. “That study, however, was limited by its retrospective design. There was no postoperative protocol in place to ensure that a thorough skin exam was performed on each patient at every follow-up visit. Therefore, we hypothesized that a more thorough postoperative follow-up of patients would detect some skin injury following these cases.”

For the current study, she and her associates sought to examine the prevalence of deterministic effects after FEVAR as well as any patient characteristics that may predispose patients to skin injury.

In June 2013, the researchers implemented a new policy regarding the follow-up of FEVAR patients, which involved a full skin exam at postoperative weeks 2 and 4 and at 3 and 6 months, as well as questioning patients about any skin-related complaints. For the current study, they retrospectively reviewed all FEVARs over a 7-month period after the change in policy and flagged all cases that reached a RAK of 5 Gy or greater.

Peak skin dose, a dose index, and simulated skin dose maps were calculated using customized software employing input data from fluoroscopic machine logs. Of 317 cases performed, 22 met or exceeded a RAK of 5 Gy. Of these, 21 were FEVARs and one was an embolization. Dr. Kirkwood reported that the average RAK for all FEVARs was 8 Gy, with a range of 5-11 Gy.

Slightly more than half of patients (52%) had multiple fluoroscopically guided interventions within 6 months of their SRDL event. The average RAK for these patients was 10 Gy (range of 5-15 Gy). The mean peak skin dose for all FEVARs was 5 Gy (range of 2-10 Gy), and the dose index was 0.69. The average peak skin dose for the subset of patients with multiple procedures was 7 Gy (range of 3-9 Gy).
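For readers who want to see the bookkeeping behind numbers like these, the sketch below shows how SRDL cases might be pulled from a procedure log and summarized. The case values and field layout are invented for illustration; this is not the customized software the authors describe, and the dose index is simply PSD/RAK averaged over the cohort.

```python
# Illustrative only: screening a procedure log for SRDL cases (RAK >= 5 Gy)
# and summarizing dose metrics. All values are made up.
from statistics import mean

cases = [  # (procedure, rak_gy, psd_gy)
    ("FEVAR", 8.2, 5.1),
    ("FEVAR", 5.4, 3.3),
    ("FEVAR", 10.7, 7.9),
    ("embolization", 6.0, 4.4),
    ("EVAR", 2.1, 1.2),
]

SRDL_GY = 5.0  # substantial radiation dose level threshold, expressed as RAK

srdl_cases = [c for c in cases if c[1] >= SRDL_GY]
fevars = [c for c in srdl_cases if c[0] == "FEVAR"]

mean_rak = mean(rak for _, rak, _ in fevars)
mean_psd = mean(psd for _, _, psd in fevars)
dose_index = mean(psd / rak for _, rak, psd in fevars)  # per-case PSD/RAK, averaged

print(f"SRDL cases: {len(srdl_cases)} of {len(cases)}")
print(f"FEVAR mean RAK {mean_rak:.1f} Gy, mean PSD {mean_psd:.1f} Gy, dose index {dose_index:.2f}")
```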

In terms of the follow-up, all 21 FEVAR patients were examined at the 1- or 2-week mark, 81% were examined at 1 month, 52% were examined at 3 months, and 62% were examined at 6 months. No radiation skin injuries were reported. “Based on the published data, we would expect to see all grades of skin injury, especially in the cohort of the 5-10 Gy,” Dr. Kirkwood said.

In the previous study, conducted prior to the new follow-up policy, the dose index for FEVARs was 0.78, “meaning that the peak skin dose that the patient received could be roughly estimated as 78% of the RAK dose displayed on the monitor,” Dr. Kirkwood explained.

“In the current work, the dose index decreased to 60%. This suggests that surgeons in our group have now more appropriately and effectively employed strategies to decrease radiation dose to the patient. However, even when the best operating practice is employed, FEVARs still continue to require high radiation doses in order to complete.”
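The arithmetic behind the dose index is simple enough to show directly: peak skin dose is estimated as the displayed RAK multiplied by the index. In the sketch below, the 8-Gy RAK is an assumed example; only the 0.78 and 0.60 indices come from the text.

```python
# Estimating peak skin dose (PSD) from the RAK displayed on the monitor.
# The 8-Gy RAK is an assumed example; 0.78 and 0.60 are the dose indices
# reported for the earlier and current studies, respectively.
displayed_rak_gy = 8.0

psd_previous = displayed_rak_gy * 0.78  # prior practice: PSD roughly 78% of RAK
psd_current = displayed_rak_gy * 0.60   # current practice: PSD roughly 60% of RAK

print(f"Estimated PSD at index 0.78: {psd_previous:.1f} Gy")  # 6.2 Gy
print(f"Estimated PSD at index 0.60: {psd_current:.1f} Gy")   # 4.8 Gy
```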

The present study demonstrated that deterministic skin injuries “are uncommon after FEVAR, even at high RAK levels and regardless of cumulative dose,” she concluded. “Even with more comprehensive patient follow-up, the fact that no skin injuries were reported suggests that skin injuries in this patient cohort are less prevalent than the published guidelines would predict.”

Dr. Kirkwood reported no financial disclosures.


FDA’s new labeling rule: clinical implications


As reviewed in a previous column, in December 2014 the Food and Drug Administration released the Pregnancy and Lactation Labeling Rule (PLLR), which will go into effect on June 30, 2015. It replaces, and addresses the limitations of, the system that has been in place for more than 30 years, which assigned drugs a pregnancy risk category of A, B, C, D, or X with the purpose of informing the clinician and patient about the reproductive safety of medications during pregnancy. Though well intentioned, that system has drawn abundant criticism.

The system certainly simplified the interaction between physicians and patients, who presumably would be reassured that the risk of a given medicine had been quantified by a regulatory body and could therefore serve as a basis for deciding whether or not to take it during pregnancy. But while the purpose of the labeling system was to provide overarching guidance about the available reproductive safety information on a medicine, it was ultimately used by clinicians and patients either to garner reassurance about a medicine or to heighten concern about it.

From the outset, the system could not take into account the accruing reproductive safety information regarding compounds across therapeutic categories, and as a result, the risk category could be inadvertently reassuring or even misleading to patients with respect to medicines they might decide to stop or to continue.

With the older labeling system, some medicines are in the same category, despite very different amounts of reproductive safety information available on the drugs. In the 1990s, there were more reproductive safety data available on certain selective serotonin reuptake inhibitors (SSRIs), compared with others, but now the amount of such data available across SSRIs is fairly consistent. Yet SSRI labels have not been updated with the abundance of new reproductive safety information that has become available.

Almost 10 years ago, paroxetine (Paxil) was switched from a category C to D, when first-trimester exposure was linked to an increased risk of birth defects, particularly heart defects. But it was not switched back to category C when data became available that did not support that level of concern. Because of some of its side effects, paroxetine may not be considered by many to be a first-line treatment for major depression, but it certainly would not be absolutely contraindicated during pregnancy as might be presumed by the assignment of a category D label.

Lithium and sodium valproate provide another example of the limitations of the old system, which will be addressed in the new system. While the teratogenicity of both agents has been well described, the absolute risk of malformations with fetal exposure to lithium is approximately 0.05%-0.1%, but the risk of neural tube defects with sodium valproate is estimated at 8%. Complicating the issue further, in 2013, the FDA announced that sodium valproate had been changed from a category D to X for migraine prevention, but retained the category D classification for other indications.

Placing lithium in category D suggests a relative contraindication and yet discontinuing that medication during pregnancy can put the mother and her baby at risk, given the data supporting the rapid onset of relapse in women who stop mood stabilizers during pregnancy.

For women maintained on lithium for recurrent or brittle bipolar disorder, the drug would certainly not be contraindicated and may afford critical emotional well-being and protection from relapse during pregnancy; the clinical scenario of discontinuation of lithium proximate to or during pregnancy and subsequent relapse of underlying illness is a serious clinical matter frequently demanding urgent intervention.

Still another example of the incomplete informative value of the older system is found in the assignment of atypical antipsychotics to different risk categories. Lurasidone (Latuda), approved in 2010, is in category B, but other atypical antipsychotics are in category C. One might assume that this implies more reproductive safety data are available on lurasidone supporting its safety, but in fact, reproductive safety data for this molecule are extremely limited, and it was the absence of adverse event information that resulted in a category B. This is a great example of the clinical maxim that incomplete or sparse data are just that: they do not imply safety; they imply that we do not know a lot about the safety of a medication.

If the old system of pregnancy labeling was arbitrary, the PLLR will be more descriptive. Safety information during pregnancy and lactation in the drug label will appear in a section on pregnancy, reformatted to include a risk summary, clinical considerations, and data subsections, as well as a section on lactation, and a section on females and males of reproductive potential.

Ongoing revision of the label as new information accrues is a requirement, and manufacturers will be obligated to include information on whether there is a pregnancy registry for the given drug. The goal of the PLLR is thus to provide the patient and clinician with information that addresses both sides of the risk-benefit decision for a given medicine – the risks of fetal drug exposure and the risk of untreated illness for the woman and baby, a factor that is not addressed at all in the current system.
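As a rough way to visualize the reorganization just described, the sketch below models the PLLR's labeling sections as a simple data structure. The field names are informal shorthand for the sections named above, not an FDA schema.

```python
# A loose illustration of the PLLR label structure described above.
# Field names are informal shorthand, not the FDA's official schema.
from dataclasses import dataclass

@dataclass
class PregnancySection:
    risk_summary: str
    clinical_considerations: str
    data: str
    pregnancy_registry: str  # manufacturers must state whether a registry exists

@dataclass
class PLLRLabel:
    pregnancy: PregnancySection
    lactation: str
    females_and_males_of_reproductive_potential: str

# Example with placeholder text only.
label = PLLRLabel(
    pregnancy=PregnancySection(
        risk_summary="...",
        clinical_considerations="...",
        data="...",
        pregnancy_registry="...",
    ),
    lactation="...",
    females_and_males_of_reproductive_potential="...",
)
```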

Certainly, the new label system will be a charge to industry to establish, support, and encourage enrollment in well-designed pregnancy registries across therapeutic areas to provide ample amounts of good quality data that can then be used by patients along with their physicians to make the most appropriate clinical decisions.

Much of the currently available reproductive safety information on drugs is derived from spontaneous reports, where there has been inconsistent information and variable levels of scrutiny with respect to outcomes assessment, and from small, underpowered cohort studies or large administrative databases. Postmarketing surveillance efforts have been rather modest and have not been a priority for manufacturers in most cases. Hopefully, this will change as pregnancy registries become part of routine postmarketing surveillance.

The new system will not be a panacea, and I expect there will be growing pains, considering the huge challenge of reducing the available data of varying quality into distinct paragraphs. It may also be difficult to synthesize the volume of data and the nuanced differences between certain studies into a paragraph on risk assessment. The task will be simpler for some agents and more challenging for others where the data are less consistent. Questions also remain as to how data will be revised over time.

But despite these challenges, the new system represents a monumental change, and in my mind, will bring a focus to the importance of the issue of quantifying reproductive safety of medications used by women either planning to get pregnant or who are pregnant or breastfeeding, across therapeutic areas. Of particular importance, the new system will hopefully lead to more discussion between physician and patient about what is and is not known about the reproductive safety of a medication, versus a cursory reference to some previously assigned category label.

Our group has shown that when it comes to making decisions about using medication during pregnancy, even when given the same information, women will make different decisions. This is critical since people make personal decisions about the use of these medications in collaboration with their doctors on a case-by-case basis, based on personal preference, available information, and clinical conditions across a spectrum of severity.

As the FDA requirements shift from arbitrary category labels to a more descriptive explanation of risk based on available data, an important question will be what mechanism regulators, working with industry, will use to update labels as information on reproductive safety grows – particularly if industry commits to enhancing postmarketing surveillance with more pregnancy registries.

Better data can catalyze thoughtful discussions between doctor and patient regarding decisions to use or defer treatment with a given medicine. One might wonder if the new system will open a Pandora’s box. But I believe in this case, opening Pandora’s box would be welcome because it will hopefully lead to a more careful examination of the available information regarding reproductive safety and more informed decisions on the part of patients.

Dr. Cohen is the director of the Center for Women’s Mental Health at Massachusetts General Hospital in Boston, which provides information about reproductive mental health. He has been a consultant to manufacturers of antidepressant medications and is the principal investigator of the National Pregnancy Registry for Atypical Antipsychotics, which receives support from the manufacturers of those drugs. Go to obgynnews.com to view similar columns.
