In response to “Preoperative angiotensin axis blockade therapy, intraoperative hypotension, and the risks of postoperative acute kidney injury”


We are pleased to note the response of Onuigbo et al. to our article demonstrating an increase in acute kidney injury (AKI) associated with angiotensin axis blockade (AAB) in major orthopedic surgery.[1] Like Onuigbo et al., we also noted that intraoperative hypotension and AAB are associated with AKI.[2] In addition, we found that AAB‐associated AKI occurred independently of intraoperative hypotension. Because of our findings, we withhold angiotensin‐converting enzyme inhibitors and angiotensin receptor blockers on the day of surgery in all patients presenting for major orthopedic surgery whose blood pressure is well controlled preoperatively. We were concerned that this practice might increase the incidence of pre‐ and postoperative hypertension in such patients, but we have been reassured by a recent article demonstrating that this does not occur in outpatient surgical patients.[3]

We caution, however, that the common sense approach of stopping AAB preoperatively to avoid possible AKI still requires evaluation by a properly conducted randomized controlled trial. Because of the prolonged systemic half‐life and duration of tissue activity (>24 hours) of many AAB agents,[4] the required preoperative cessation period of AAB may vary considerably.

References
  1. Nielson E, Hennrikus E, Lehman E, Mets B. Angiotensin axis blockade, hypotension, and acute kidney injury in elective major orthopedic surgery. J Hosp Med. 2014;9:283–288.
  2. Onuigbo MA, Onuigbo NTC. A second case of “quadruple whammy” in a week in a northwestern Wisconsin hospital. BMJ. 2013;346:f678.
  3. Twersky RS, Goel V, Narayan P, Weedon J. The risk of hypertension after preoperative discontinuation of angiotensin‐converting enzyme inhibitors or angiotensin receptor antagonists in ambulatory and same‐day admission patients. Anesth Analg. 2014;118:938–944.
  4. Mets B. Management of hypotension associated with angiotensin‐axis blockade and general anesthesia administration. J Cardiothorac Vasc Anesth. 2013;27:156–167.
Issue
Journal of Hospital Medicine - 9(9)
Page Number
611-611
Article Source
© 2014 Society of Hospital Medicine

Psychogenic Nonepileptic Seizures


From the Department of Neurology, University of Maryland School of Medicine, Baltimore, MD.

 

Abstract

  • Objective: To provide a review of psychogenic nonepileptic seizures, including a discussion of the diagnosis, treatment, and clinical significance of the disorder.
  • Methods: Review of the relevant literature.
  • Results: Psychogenic nonepileptic seizures are a common and potentially disabling neurologic disorder. They are most prevalent in young adults, and more commonly seen in women versus men. Certain psychosocial variables may impact the development of the condition. The diagnosis is made through a detailed history and observation of clinical events in conjunction with video EEG monitoring. Neuropsychological testing is an important component in the evaluation. Treatment includes establishment of an accurate diagnosis, management of any underlying psychiatric diagnoses, and regular follow-up with a neurologist or trained care provider.
  • Conclusion: Psychogenic nonepileptic seizures represent a complex interaction between neurologic and psychological factors. Obtaining an accurate diagnosis through the use of video EEG monitoring and clinical observation is an important initial step in treatment and improved quality of life in this patient population.

 

Psychogenic nonepileptic seizures (PNES) are commonly encountered in outpatient specialty epilepsy clinics as well as inpatient epilepsy monitoring units. They comprise approximately 20% of all refractory seizure disorders referred to specialty epilepsy centers [1–4]. PNES are thought to be psychological in origin as opposed to arising from abnormal electrical discharges as in epileptic seizures. PNES may be more frequent and disabling than epileptic seizures, and patients with PNES may report worse outcomes [5,6]. Increased utilization of long-term video EEG monitoring along with greater recognition of psychogenic neurologic disorders has allowed for improved diagnosis of PNES. However, many diagnostic and therapeutic challenges remain. There are often delays in obtaining an accurate diagnosis, and optimal management remains challenging, often leading to inappropriate, ineffective, and costly treatment, sometimes for many years [6–8].

Epidemiology

PNES are seen across the spectrum of age-groups, from children [9,10] to elderly persons, but they most often occur in young adults between the ages of 15 and 35 years [1,8]. Caution should be used when considering this diagnosis in infants or young children, in whom it is more common to see physiologic events that may mimic epileptic seizures, including gastroesophageal reflux, shuddering, night terrors, or breath-holding spells [1,9,10].

PNES are prevalent within epilepsy practices. Patients with PNES comprise approximately 5% to 20% of patients thought to have intractable epilepsy seen in outpatient centers, and within epilepsy monitoring units they account for 10% to 40% of patients [1,2,6,8]. A population-based study estimated the incidence of PNES at 1.4 per 100,000 people overall, and 3.4 per 100,000 among people between the ages of 15 and 24 years [4].

There is a female preponderance in PNES, which is similar to other conversion and somatoform disorders. Overall, women comprise approximately 70% to 80% of patients with the PNES diagnosis [1,2,6]. There are psychosocial variables that are seen in some patients with this disorder. An important factor that has been described is past history of sexual or physical abuse. In one series, there was a history of sexual abuse in almost 25% of patients with PNES, and history of either sexual abuse, physical abuse, or both in 32% of patients [11]. A history of sexual and/or physical abuse is not exclusive to these patients, and can certainly be seen in patients with epilepsy as well. For example, in a control population of epilepsy patients, there was a reported rate of past sexual or physical abuse approaching 9% [12].

A prior history of head trauma, often relatively mild, has been described as a potential inciting factor for some cases of PNES [6,13]. In the literature, as many as 20% of PNES patients have attributed their seizures to head trauma [6,14].

Historical Context

Historically, what today are called PNES originate with the concept of hysteria, a medical diagnosis in women that can be traced to antiquity [15,16]. By the late 1800s, one of the founders of neurology, Jean Charcot, established hysterical seizures as an important clinical entity with his detailed, elegant descriptions of patients. Charcot formulated clinical methods for distinguishing hysteria, and particularly hysterical seizures, from epilepsy. He presumed that hysteria and epilepsy were closely related, and he termed seizures due to hysteria “hysteroepilepsy” or “epileptiform hysteria.” Charcot proposed that hysterical seizures were organic disorders of the brain, like other forms of seizures and epilepsy, and emphasized their relation to disturbance of the female reproductive system [17,18]. Charcot utilized techniques such as manipulation of “hysterogenic zones” and ovarian compression, as well as suggestion, to both treat and provoke hysteria and hysterical seizures, which he described and documented [17,18]. One of Charcot’s most celebrated students, Sigmund Freud, observed Charcot’s demonstrations but drew different conclusions. He theorized that hysteria and hysterical seizures were not organic disorders of the brain as Charcot proposed, but were rather emotional disorders of the unconscious mind due to repressed energies or drives. Based largely on the theories of Freud and Charcot, individuals with hysteria were distinguished from those with epilepsy: hysterical seizures were attributed to psychological dysfunction, while epileptic seizures were associated with physical or organic brain disorders [15,16].

With the introduction of EEG recording in the 1930s, it became possible to characterize epilepsy as an electrical disorder of the brain with associated EEG changes and more effectively distinguish it from hysterical seizures, which did not have such abnormalities. In addition, in the first half of the 20th century, the nature of hysteria as seen and diagnosed by physicians seemed to change. The dramatic, theatrical convulsions described by Charcot and his contemporaries appeared less commonly, while disorders such as chronic pain seemed to increase [1,19].

However, by the 1960s, several reports confirmed that hysterical seizures were actually still prevalent. Newer terms like “pseudoseizures” were used to describe these disorders because the term “hysteria” was thought to be somewhat derogatory, anti-feminist, and antiquated [20,21]. In the 1970s and thereafter, with the increasing availability of video EEG monitoring and growth of inpatient epilepsy monitoring units, it was discovered that these hysterical, pseudo-, or what were also by then termed psychogenic seizures, were actually still common [1,22].

More recently, it has been recognized that the pendulum in some cases may have swung too far in regard to the diagnosis of this disorder. Some rare patients with seizures initially diagnosed as PNES may actually have forms of epileptic seizures such as frontal lobe epilepsy or related physiological disorders rather than psychogenic causes for their episodes [1,23]. These types of epileptic seizures can be very difficult to diagnose properly unless one appreciates how they present and manifest and remains vigilant for them during evaluation [1,23].

Terminology

There is an ongoing debate regarding the appropriate terminology for psychogenic events, and there is no uniform standardized definition or classification at this time. The term that is currently preferred within the epilepsy community for seizures of psychological origin that are thought to be associated with conversion, somatization, or dissociative disorders is “psychogenic nonepileptic seizures” (PNES). This terminology is felt to be non-disparaging and more neutral as compared with other terms such as pseudoseizures, which were previously favored. Nonepileptic seizures or nonepileptic events are broader terms meant to incorporate both physiologic and psychological causes for disorders that are mistaken for epilepsy. PNES are widely defined as paroxysmal events that appear similar to epileptic seizures but are not due to abnormal electrical discharges in the brain and as noted, are typically thought to be related or caused by conversion, somatization, or dissociative disorders.

Physiologic nonepileptic events are another category of physical disorders that may be mistaken for epilepsy. The underlying causes differ between age-groups, and can include conditions such as cardiac arrhythmias, migraine variants, syncope, or metabolic abnormalities. Physiologic nonepileptic seizures account for only a small proportion of all patients with nonepileptic seizures or events [1]. In general, any patient with a psychological disorder that causes symptoms that are mistaken for epilepsy can be said to have PNES.

Clinical Characteristics And Presentation

PNES and epileptic seizures are predominantly distinguished through clinical observation along with descriptions from the patient or witnesses, and an understanding of seizure semiology. Although video EEG may be needed to confirm the diagnosis, certain clinical characteristics and historical details can help to distinguish between the 2 disorders (Table 1) [24,25]. Features to consider include movements and/or vocalizations during seizures, duration of seizures, and other factors such as injury, incontinence, and amnesia [1,24,25]. Caution must be taken not to use one sign or feature in isolation, as none have been found to be specifically pathognomonic.

The duration of PNES is often significantly longer than that seen in epileptic seizures, which usually last less than 3 minutes, excluding the postictal period. PNES may also exhibit waxing and waning convulsive activity, although this finding can certainly be seen in epileptic seizures as well. PNES may be shown to have distractibility with external stimuli. Additionally, the movements in PNES may appear asymmetric, asynchronous, or purposeful, although this is not diagnostic for this disorder. This may contrast with the well-defined, synchronous tonic-clonic activity typically seen in epileptic seizures [1,24,25]. Back arching and pelvic thrusting movements can also be seen in PNES. Despite these differences, it may still be challenging to distinguish the semi-purposeful behaviors of PNES from the automatisms of certain focal epileptic seizures. The often bizarre-appearing, hypermotor activity that can be seen in frontal lobe seizures is often especially difficult to differentiate from PNES [1,23].

Another important consideration is that consciousness is preserved in PNES, while consciousness and responsiveness are frequently impaired in epileptic seizures. Patients with PNES are often apparently unresponsive during events, although there is no true impairment of awareness. Other characteristics that are more commonly seen in PNES are crying and eye closure [26]. Self-injury and incontinence may be reported, but they are less often clearly witnessed or documented [27,28]. Additionally, although patients may at times appear to be asleep at seizure onset, EEG recordings document that the patient is actually asleep in less than 1% of cases [29]. While epileptic seizures often respond well to antiepileptic medications, PNES characteristically do not [1,3,6,8].

In certain situations, provocation maneuvers may be utilized in order to reproduce PNES in patients undergoing EEG monitoring. In comparison to epileptic seizures, suggestion and emotional stimuli are more likely to trigger psychogenic events [1]. Methods utilized to provoke PNES may include saline injections, placement of a tuning fork on the head or body, or even hypnosis, when a suggestion is concurrently provided that such maneuvers can trigger the patient’s seizures [1,30,31]. When evaluating seizures that are provoked in such a manner, it is important to consider whether or not the event captured is in fact a typical event for the patient, or whether the provocation has uncovered a different, atypical event. Given that PNES and epileptic seizures can co-exist within the same patient, care should be taken to avoid making a diagnosis based on capturing an atypical event, or capturing only a subset of a patient’s seizure types. This could result in failure to make an accurate and thorough diagnosis [23]. There is debate regarding the ethics of provoking seizures by way of suggestion. Some members of the epilepsy community feel that provoking seizures through suggestion is inherently deceitful, and therefore can damage the physician-patient relationship. Others assert that such provocative testing can be undertaken in an honest manner, and can ultimately help achieve an accurate diagnosis for the patient [32].

As previously mentioned, a proportion of patients have co-existing epileptic seizures and PNES, and obtaining an accurate diagnosis can be especially challenging in this group. Studies have reported that around 10% to 40% of patients with PNES also have epilepsy [1,22,23,33]. Care must be taken to distinguish between the different seizure types and, if necessary, video EEG monitoring may be needed to capture both seizure types for an accurate diagnosis. The recordings can then be useful in educating families and caregivers, who may be shown the videos, with the patient’s consent, in order to guide future care.

Evaluation And Diagnosis

As in much of neurology, a thorough history, along with detailed clinical observation remains essential in the diagnosis of patients with PNES and for distinguishing these events from epilepsy. Video EEG monitoring of seizures is a key adjunct to the history and clinical observation in diagnosing this condition [1,2]. Long-term video EEG monitoring is considered the “gold standard” in the characterization and differential diagnosis of seizures. Additional potentially helpful diagnostic techniques include video EEG-monitored seizure provocation, serum prolactin levels, single photon emission computed tomography, and neuropsychological testing.

Video EEG Monitoring

Video EEG monitoring, often undertaken in dedicated inpatient epilepsy monitoring units, has become a mainstay for diagnosis of psychogenic seizures. Ideally, a typical seizure is recorded with simultaneous EEG and video monitoring with no evidence of epileptic activity seen during the event. In patients with generalized convulsive epileptic seizures, the EEG should show an ictal correlate during the seizure. In the case of focal seizures with impaired awareness (complex partial seizures), the EEG will demonstrate a corresponding ictal abnormality in 85 to 95% of cases [1]. Focal seizures without impaired awareness (simple partial seizures) may not necessarily be associated with a corresponding EEG change. Up to 60% of such seizures have been shown to produce an ictal EEG abnormality, and this number may rise to almost 80% if multiple seizures are captured [34]. It is extremely important to capture a typical event with video EEG monitoring because an interictal or routine EEG may not provide all of the needed information to make a diagnosis. Specifically, a normal routine (non-ictal) EEG may be seen in epilepsy patients, and minor or non-specific abnormalities can be seen on EEGs of patients with PNES (Table 2) [1,6,8,22].

EEG monitoring for characterization of clinical events can be conducted on an ambulatory or outpatient basis or in dedicated inpatient epilepsy monitoring units. Ambulatory monitoring can be useful in the case of patients who report seizures that are more frequent in their home environment or in patients with frequent events. If events are infrequent, then inpatient monitoring may be more efficacious [1]. With longer-term inpatient monitoring, antiepileptic medications can be withdrawn in a supervised setting, in order to lower the seizure threshold as well as to safely discontinue medications that may not be necessary. Such medication titrations are typically not safe in an unsupervised outpatient setting. Some ambulatory EEG monitoring systems do allow for simultaneous video and EEG recording. However, an advantage to inpatient monitoring, which is not afforded in the outpatient setting, is the ability for nursing staff or physicians to perform clinical testing during events to assess for patient responsiveness and other features. Additionally, with inpatient monitoring, EEG technicians can routinely assess for any technical problems with the electrodes or recording system.

Another benefit of video EEG monitoring is that the state (waking, drowsy, or asleep) of the patient at the onset of an event can be established. While epileptic seizures can arise from any state, PNES most often occur from wakefulness. Patients with PNES may appear to be asleep at the onset of events, and they may report seizures from sleep. Video EEG monitoring can help to establish the waking or sleep state of the patient that may aid in diagnosis [29].

Prolactin Levels

Serum prolactin levels may be helpful in the diagnosis of PNES [35,36]. Following generalized tonic-clonic or complex partial epileptic seizures, the serum prolactin level can rise anywhere from two- to threefold up to five- to tenfold [37]. The maximal rise in serum prolactin occurs in the initial 20 to 60 minutes after the seizure [35–37]. A similar rise in serum prolactin would not be expected in PNES. Although prolactin levels may have some utility in diagnosis, they are not currently routinely ordered as part of a standard admission to most inpatient epilepsy monitoring units. This may be due in part to the fact that false-positive and false-negative results can occur [37–39]. For example, there may not be a rise in the prolactin level after a simple partial seizure or a more subtle complex partial seizure.
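The fold-rise arithmetic described above can be illustrated with a minimal sketch. This is not a validated clinical decision rule: the function name, the twofold minimum threshold, and the 20- to 60-minute sampling window are illustrative assumptions drawn from the figures in this section, and, as noted, a normal post-ictal level does not exclude an epileptic seizure.

```python
def prolactin_rise_supportive(baseline_ng_ml: float,
                              postictal_ng_ml: float,
                              minutes_after_event: float,
                              min_fold_rise: float = 2.0) -> bool:
    """Illustrative check (not a clinical rule): is a post-ictal serum
    prolactin, drawn during the window of maximal rise, elevated at
    least min_fold_rise over baseline? A rise of roughly 2- to 10-fold
    is described after generalized tonic-clonic or complex partial
    epileptic seizures; no such rise is expected after PNES."""
    if baseline_ng_ml <= 0:
        raise ValueError("baseline must be positive")
    if not (20 <= minutes_after_event <= 60):
        # Outside the described 20-60 minute window of maximal rise,
        # the level is hard to interpret; refuse rather than guess.
        raise ValueError("sample drawn outside the 20-60 minute window")
    return postictal_ng_ml >= min_fold_rise * baseline_ng_ml
```

For example, a level of 45 ng/mL at 30 minutes against a baseline of 10 ng/mL would meet the illustrative twofold criterion, whereas 12 ng/mL would not; either way, the result is only one piece of the overall evaluation.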

Neuropsychological Testing

Neuropsychological testing is also a key component in the evaluation and diagnosis of PNES. Ideally, a mental health provider with a background in psychological assessment and neuropsychological intervention for patients with psychogenic disorders would perform the evaluation [40,41].

The goal of the evaluation should not be solely to determine whether the patient suffers from nonepileptic or epileptic seizures; that determination is better made by an epileptologist upon review of the clinical, electrographic, and neuropsychological data. Moreover, neuropsychological testing cannot in itself either diagnose or exclude the possibility that a seizure disorder is nonepileptic because of the considerable overlap between epileptic and nonepileptic test results [40,41]. Neuropsychological evaluations aid this assessment by (1) determining the potential or likelihood of significant contributing psychopathology or cognitive difficulties, (2) defining the nature of the associated psychological or psychosocial issues, and (3) assessing how a patient might benefit from various psychologically based interventions [1]. The testing may identify psychological problems that can guide treatment after diagnosis.

Delays in Diagnosis

Correct and prompt diagnosis is essential for patients with PNES, as is appropriate referral to a knowledgeable, trained mental health professional. On average, however, patients with PNES are diagnosed 7.2 years after manifestation (SD 9.3 years), with reported mean delays of 5 to 7 years. Younger age, interictal epileptiform potentials on EEG, and anticonvulsant treatment are associated with longer delays [42,43]. Delays are also thought to occur because of problems with “ownership” of these patients. Although typically neurologists are involved in the diagnosis of PNES, often using video EEG monitoring done in an inpatient setting, the next step is often a referral to a psychiatrist or mental health care provider. There can be delays in the initial referral to the neurologist, in referral to specialists for video EEG testing, and in referral to the physicians, psychologists, or social workers who may provide treatment. Another disconnect occurs when patients are “lost to follow-up”: they receive a referral for mental health care and either do not pursue it on their own or were never given a full explanation of the reason for it. In addition, many mental health professionals are not trained in the evaluation and treatment of psychogenic symptoms and may even feel uncomfortable dealing with these patients [13,44].

Many studies suggest that delays in diagnosis may result in poorer outcomes [45,46], while other studies suggest that patients who receive an acute diagnosis of PNES upon presentation may do particularly well [8,47–49]. Some of the most recent large outcome studies suggest that delays in diagnosis may not worsen outcome and that outcome is instead predicted by other factors [50–52].

Management

Management of patients with PNES is similar to that for patients with other types of so-called abnormal illness behavior, although there remains a relative paucity of evidence for specific treatment strategies for PNES [1]. The first consideration should be the manner in which the diagnosis of PNES is presented to the patient and family. It is important to be honest with the patient and demonstrate a positive approach to the diagnosis [53]. The physician should emphasize as favorable or good news the fact that the patient does not have epilepsy, and should also stress that the disorder, although serious and "real," does not require treatment with antiepileptic medications and that once stress or emotional issues are resolved, the patient has the potential to gain better control of these events [1,54,55]. Nevertheless, not all patients readily accept the diagnosis or this type of approach. Some patients may seek other opinions, and this should not be discouraged. An adversarial relationship with the patient should be avoided. The patient should be encouraged to return if desired, and records should be made available to other health care providers to avoid duplication of services.

After the diagnosis of PNES is presented, supportive measures should be initiated. PNES patients may benefit from education and support that can be provided by the neurologist or primary care physician [1]. If the neuropsychological assessment suggests a clinical profile that requires a professional mental health intervention, then an appropriate referral should be made. Regular follow-up visits for the patient with the neurologist are useful even if a mental health professional is involved [49,56]. This allows the patient to get medical attention without demonstrating illness behavior. Patient education and support are stressed at these visits. Because family issues are often important contributing factors, physicians should consider involving family members in visits with consent of the patient [1].

A variety of treatment strategies are employed for the management of PNES, including cognitive behavioral therapy (CBT), group and family therapy, antidepressant medication, and other forms of rehabilitation [5,57,58]. A 2007 Cochrane review that identified 608 references for non-medication PNES treatments found that only 3 studies met criteria for a randomized controlled trial. One of the more recently favored treatment options for PNES, which has previously been applied to various somatoform and other psychiatric disorders, is CBT [57,59,60]. This form of psychotherapy can be administered by trained personnel in a time-limited fashion using defined protocols. The basis of this treatment is that the patient learns to increase awareness of their dysfunctional thoughts and learns new ways to respond to them [57,58]. To date, several groups have reported nonrandomized trials, case reports, and case series that support the utility of this treatment, including significant reductions in seizure frequency, and this treatment strategy appears very promising [61–65]. Preliminary randomized controlled trials have also been piloted and suggest that this may be a valid treatment approach [66].

Prognosis

The outcomes of patients with PNES vary. Long-term follow-up studies show that about half of all patients with PNES function reasonably well following their diagnosis. However, only approximately one-third of patients will completely stop having seizures or related problems, and approximately 50% have poor functional outcomes [1,2,50]. When the diagnosis of PNES is based on reliable criteria such as video EEG monitoring, misdiagnosis is unlikely. Instead, the usual cause for a poor outcome is a patient’s chronic psychological and social problems [1,8,22,50].

It is noteworthy that children with PNES appear to have a much better prognosis than adults [9,10]. The etiology in children may be related more to transient stress and coping disorders, whereas adults are more likely to develop PNES in the context of more chronic psychological maladjustment, such as personality disorders [10]. Another factor accounting for the better outcomes in children is that they are usually diagnosed correctly earlier in the course of their disorder [9,10].

Patients with milder psychopathology respond better to supportive educational or behavioral therapeutic approaches. In contrast, patients with more severe psychopathology and factitious disorders more often have associated chronic personality problems and, correspondingly, a poorer prognosis [1,50]. It also appears that patients who continue to be followed by the diagnosing neurologist or center do better than those who are not seen after diagnosis [49,67]. As knowledge about the nature of PNES and the associated psychopathology grows, better treatment strategies can be developed that will improve the care and prognosis of these difficult and challenging patients.

In a large study of 164 patients followed for up to 10 years, outcomes were generally considered poor, but favorable prognostic factors included higher education, younger age at onset and at diagnosis, and less “dramatic” attacks, defined as a lack of positive motor features, ictal incontinence, or tongue biting. These findings were consistent with prior studies [52,68].

In addition, patients who had fewer seizures and did better over the long term showed fewer somatoform and dissociative symptoms on psychometric testing [51]. These findings are often explained by the theory that patients who do poorly have poor strategies for coping with stress and anxiety and, in a sense, suffer from emotional dysregulation.

Special Issues

Coexisting Epileptic and Psychogenic Nonepileptic Seizures

A complicating factor in diagnosis is that both PNES and epileptic seizures may occur in a single patient. Indeed, approximately 10% to 40% of patients identified as having PNES have also been reported to have epileptic seizures [1,23,33,56]. There are several possible explanations. Some patients with epilepsy may learn that seizures bring attention and fulfill certain psychological needs. Alternatively, they may have concomitant neurologic problems, personality disorders, cognitive deficits, or impaired coping mechanisms that predispose them to psychogenic symptoms [69–71]. Fortunately, in such patients with combined seizure disorders, the epileptic seizures are usually well controlled or of only historical relevance by the time the patient develops PNES [1,22,23,33,72–74].

In other patients, epileptic seizures and PNES may begin simultaneously, making management even more complex. In such patients, we have found it particularly helpful to focus on the semiology of seizure manifestations as recorded by video EEG monitoring to distinguish PNES from epileptic seizures, and we then direct treatment according to the semiology manifesting at that time. We have also found it useful, with patient consent, to show videos of seizures to family members or caregivers to help them understand how best to respond to a patient’s symptoms when epileptic seizures and PNES co-exist.

Misdiagnosis of Psychogenic Nonepileptic Seizures

Sometimes events that are initially diagnosed as nonepileptic actually prove to be epileptic. Such events have been called “pseudo-pseudo” or “epileptic-nonepileptic” seizures [1]. Frontal lobe seizures in particular may not be associated with significant ictal EEG changes and may therefore be misdiagnosed as PNES [23,75,76]. The clinical presentation and proper diagnosis of these types of events warrant emphasis.

Notable manifestations of frontal lobe seizures that may easily be confused with hysterical behavior include shouting, laughing, cursing, clapping, snapping, genital manipulation, pelvic thrusting, pedaling, running, kicking, and thrashing [23,75–77]. Not all of these behaviors are specific for frontal lobe seizures. For example, bicycling leg movements have also been reported in seizures originating from the temporal lobe [78].

Summary

PNES represent a common yet challenging problem within neurology. This is due to the difficulty in diagnosis as well as the lack of effective and widely available treatment options. Overall outcomes of patients with PNES vary and may relate to an individual patient’s chronic psychological and social problems. However, an accurate and timely diagnosis remains critical and can help provide direction for implementing appropriate treatment.

 

Corresponding author: Jennifer Hopp, MD, Department of Neurology, University of Maryland Medical Center, Room S12C09, 22 South Greene Street, Baltimore, MD 21201, [email protected].

Financial disclosures: None.

References

1. Krumholz A. Nonepileptic seizures: diagnosis and management. Neurology 1999;S76–83.

2. Meierkord H, Will B, Fish D, Shorvon S. The clinical features and prognosis of pseudoseizures diagnosed using video-EEG telemetry. Neurology 1991;41:1643–6.

3. Lesser RP. Psychogenic seizures. Neurology 1996;46:1499–1507.

4. Sigurdardottir KR, Olafsson E. Incidence of psychogenic seizures in adults: a population-based study in Iceland. Epilepsia 1998;39:857–62.

5. Szaflarski JP, Szaflarski M, Hughes C, et al. Psychopathology and quality of life: psychogenic non-epileptic seizures versus epilepsy. Med Sci Monit 2003;9:CR113–8.

6. Barry E, Krumholz A, Bergey C, et al. Nonepileptic posttraumatic seizures. Epilepsia 1998;39:427–31.

7. Pakalnis A, Drake ME, Phillips B. Neuropsychiatric aspects of psychogenic status epilepticus. Neurology 1991;41:1104–6.

8. Walczak TS, Papacostas S, Williams DT, et al. Outcome after the diagnosis of psychogenic nonepileptic seizures. Epilepsia 1995;36:1131–7.

9. Metrick ME, Ritter FJ, Gates JR, et al. Nonepileptic events in childhood. Epilepsia 1991;32:322–8.

10. Wyllie E, Friedman D, Luders H, et al. Outcome of psychogenic seizures in children and adolescents compared to adults. Neurology 1991;41:742–4.

11. Duncan R, Oto M. Predictors of antecedent factors in psychogenic nonepileptic attacks: multivariate analysis. Neurology 2008;71:1000–5.

12. Alper K, Devinsky O, Perrine K, et al. Nonepileptic seizures and childhood sexual and physical abuse. Neurology 1993; 43:1950–3.

13. LaFrance WC Jr, Devinsky O. The treatment of nonepileptic seizures: historical perspectives and future directions. Epilepsia 2004;45 Suppl 2:15–21.

14. Westbrook LE, Devinsky O, Geocadin R. Nonepileptic seizures after head injury. Epilepsia 1998;39:978–82.

15. Slavney PR. Perspectives on hysteria. Baltimore: Johns Hopkins University Press; 1990.

16. Veith I. Hysteria: the history of a disease. Chicago: University of Chicago Press; 1965.

17. Goetz CG. Charcot the clinician. The Tuesday lessons. New York: Raven Press; 1987.

18. Massey EW, McHenry LC. Hysteroepilepsy in the nineteenth century: Charcot and Gowers. Neurology 1986;36:65–7.

19. Zeigler FJ, Imboden JB, Meyer E. Contemporary conversion reactions: a clinical study. Am J Psychiatry 1960;116:901–10.

20. Liske E, Forster FM. Pseudoseizures: a problem in the diagnosis and management of epileptic patients. Neurology 1964;14:41–9.

21. Diagnostic and statistical manual of mental disorders. DSM-IV 4th ed. American Psychiatric Association. Washington, DC; 1995.

22. Krumholz A, Niedermeyer E. Psychogenic seizures: a clinical study with follow-up data. Neurology 1983;33:498–502.

23. Krumholz A, Ting T. Co-existing epileptic and nonepileptic seizures. In: Kaplan PW, Fisher RS, editors. Imitators of epilepsy. 2nd ed. New York: Demos Medical Publishing; 2005:261–76.

24. Gates JR, Ramani V, Whalen S, Loewenson R. Ictal characteristics of pseudoseizures. Arch Neurol 1985;42:1183–87.

25. Leis AA, Ross MA, Summers AK. Psychogenic seizures: Ictal characteristics and diagnostic pitfalls. Neurology 1992;42:95–9.

26. Walczak TS, Bogolioubov. Weeping during psychogenic nonepileptic seizures. Epilepsia 1996;37:207–10.

27. Bergen D, Ristanovic R. Weeping is a common element during psychogenic nonepileptic seizures. Arch Neurol 1993;50:1059–60.

28. Peguero E, Abou-Khalil B, Fakhoury, Mathews G. Self-injury and incontinence in psychogenic seizures. Epilepsia 1995;36:586–91.

29. Orbach D, Ritaccio A, Devinsky O. Psychogenic, nonepileptic seizures associated with video-EEG-verified sleep. Epilepsia 2003;44:64–8.

30. Walczak TS, Williams DT, Berton W. Utility and reliability of placebo infusion in the evaluation of patients with seizures. Neurology 1994;44:394–99.

31. Bazil CW, Kothari M, Luciano D, et al. Provocation of nonepileptic seizures by suggestion in a general seizure population. Epilepsia 1994;35:768–70.

32. Devinsky O, Fisher RS. Ethical use of placebos and provocative testing in diagnosing nonepileptic seizures. Neurology 1996;47:866–70.

33. Lesser RP, Lueders H, Dinner DS. Evidence for epilepsy is rare in patients with psychogenic seizures. Neurology 1983; 33:502–4.

34. Barre MA, Burnstine TH, Fisher RS, Lesser RP. Electroencephalographic changes during simple partial seizures. Epilepsia 1994;35:715–20.

35. Trimble MR. Serum prolactin levels in epilepsy and hysteria. BMJ 1978;2:1682.

36. Laxer KD, Mullooly JP, Howell B. Prolactin changes after seizures classified by EEG monitoring. Neurology 1985; 35:31–5.

37. Pritchard PB, Wannamaker BB, Sagel J, et al. Endocrine function following complex partial seizures. Ann Neurol 1983;14:27–32.

38. Malkowicz DE, Legido A, Jackel RA, et al. Prolactin secretion following repetitive seizures. Neurology 1995;45:448–52.

39. Oribe E, Rohullah A, Nissenbaum E, Boal B. Serum prolactin concentrations are elevated after syncope. Neurology 1996;47:60–2.

40. Henrichs TF, Tucker DM, Farha J, Novelly RA. MMPI indices in the identification of patients evidencing pseudoseizures. Epilepsia 1988;29:184–8.

41. Wilkus RJ, Dodrill CB. Factors affecting the outcome of MMPI and neuropsychological assessments of psychogenic and epileptic seizure patients. Epilepsia 1989;30:339–47.

42. DeTimary P, Fouchet P, Sylin M, et al. Non–epileptic seizures: delayed diagnosis in patients presenting with electroencephalographic (EEG) or clinical signs of epileptic seizures. Seizure 2002;11:193–7.

43. Reuber M, Fernandez G, et al. Diagnostic delay in psychogenic nonepileptic seizures. Neurology 2002;58:493–5.

44. Rosenbaum DH, et al. Outpatient multidisciplinary management of non-epileptic seizures. In: Rowan AJ, Gates JR, editors. Non-epileptic seizures. 1st ed. Stoneham, MA: Butterworth-Heinemann; 1993:275–83.

45. Lempert T, Schmidt D. Natural history and outcome of psychogenic seizures: a clinical study in 50 patients. J Neurol 1990;237:35–8.

46. Selwa LM, Geyer J, Nikakhtar N, et al. Nonepileptic seizure outcome varies by type of spell and duration of illness. Epilepsia 2000;41:1330–4.

47. Buchanan N, Snars J. Pseudoseizures (non epileptic attack disorder): clinical management and outcome in 50 patients. Seizure 1993;2:141–6.

48. Kanner AM. More controversies on the treatment of psychogenic pseudoseizures: an addendum. Epilepsy Behav 2003;4:360–4.

49. Aboukasm A, Mahr G, Gahry BR, et al. Retrospective analysis of the effects of psychotherapeutic interventions on outcomes of psychogenic nonepileptic seizures. Epilepsia 1998;39:470–3.

50. Reuber M, Pukrop T, Bauer J, et al. Outcome in psychogenic nonepileptic seizures: 1 to 10-year follow-up in 164 patients. Ann Neurol 2003;53:305–11.

51. McKenzie P, Oto M, Russell A, Pelosi A, Duncan R. Early outcomes and predictors in 260 patients with psychogenic nonepileptic seizures (PNES). Neurology 2010;74:64–9.

52. Kanner AM, Parra J, Frey M, et al. Psychiatric and neurologic predictors of psychogenic pseudoseizure outcome. Neurology 1999;53:933–8.

53. Shen W, Bowman ES, Markand ON. Presenting the diagnosis of pseudoseizure. Neurology 1990; 40:756–9.

54. Friedman JH, LaFrance WC Jr. Psychogenic disorders: the need to speak plainly. Arch Neurol 2010;67:753–5.

55. LaFrance WC Jr. Psychogenic nonepileptic “seizures” or “attacks”? It’s not just semantics: “Seizures.” Neurology 2010;75:87–8.

56. Ramsay RE, Cohen A, Brown MC. Coexisting epilepsy and non-epileptic seizures. In: Non-epileptic seizures. Butterworth-Heinemann; 1998:47–54.

57. Stone J, Carson A, Sharpe M. Functional symptoms in neurology: management. J Neurol Neurosurg Psychiatry. 2005;6(Suppl 1):i13–i21.

58. LaFrance WC Jr, Bjornaes H. Designing treatment plans based on etiology of psychogenic nonepileptic seizures. In: Schachter SC, LaFrance WC Jr, editors. Gates and Rowan’s nonepileptic seizures. 3rd ed. New York: Cambridge University Press; 2010:266–80.

59. Kroenke K, Swindle R. Cognitive-behavioral therapy for somatization and symptom syndromes: a critical review of controlled clinical trials. Psychother Psychosom 2000;69:205–15.

60. Kroenke K. Efficacy of treatment of somatoform disorders: a review of randomized controlled trials. Psychosom Med 2007;69:881–8.

61. LaFrance WC Jr, Miller IW, Ryan CE, et al. Cognitive behavioral therapy for psychogenic nonepileptic seizures. Epilepsy Behav 2009;14:591–6.

62. Chalder T. Non-epileptic attacks: a cognitive behavioral approach in a single case with a four-year follow-up. Clin Psychol Psychother 1996;3:291–7.

63. Betts T, Duffy N. Non-epileptic attack disorder (pseudoseizures) and sexual abuse: a review. In: Gram L, Johannessen SI, Osterman PE, et al, editors. Pseudo-epileptic seizures. Petersfield, UK: Wrightson Biomedical Publishing; 1993:55–66.

64. Lesser RP. Treatment and outcome of psychogenic nonepileptic seizures. Epilepsy Currents 2003;3:198–200.

65. Ramani V. Review of psychiatric treatment strategies in non-epileptic seizures. In: Rowan AJ, Gates JR, eds. Non-epileptic Seizures. 1st ed. Stoneham, MA: Butterworth Heinemann; 1993:259–67.

66. Goldstein LH, Chalder T, Chigwedere C, et al. Cognitive-behavioral therapy for psychogenic nonepileptic seizures: a pilot RCT. Neurology 2010;74:1986–94.

67. Bennet C, So NM, Smith WB, Thompson K. Structured treatment improves the outcome of nonepileptic events. Epilepsia 1997;38(Suppl 8):214.

68. McDade G, Brown SW. Non-epileptic seizures: management and predictive factors of outcome. Seizure 1992;1:7–10.

69. Bowman ES. Etiology and clinical course of pseudoseizures: relationship to trauma, depression, and dissociation. Psychosomatics 1993;34:333–42.

70. Bowman ES, Markand ON. Psychodynamics and psychiatric diagnoses of pseudoseizure subjects. Am J Psychiatry 1996;153:57–63.

71. Vanderzant CW, Giordani B, Berent S, et al. Personality of patients with pseudoseizures. Neurology 1986;36:664–8.

72. Benbadis SR, Agrawal V, Tatum WO. How many patients with psychogenic nonepileptic seizures also have epilepsy? Neurology 2001; 57:915–7.

73. Glosser G, Roberts D, et al. Nonepileptic seizures after resective epilepsy surgery. Epilepsia 1999; 40:1750–4.

74. Reuber M, Kral T. New-onset psychogenic seizures after intracranial neurosurgery. Acta Neurochir (Wien) 2002; 144:901–7.

75. Williamson P, Spencer D, Spencer S, et al. Complex partial seizures of frontal lobe origin. Ann Neurol 1985;18:497–504.

76. Saygi S, Katz A, Marks D, et al. Frontal lobe partial seizures and psychogenic seizures: comparison of clinical and ictal characteristics. Neurology 1992;42:1274–7.

77. Waterman K, Purves S, Kosaka B, et al. An epileptic syndrome caused by mesial frontal lobe seizure foci. Neurology 1987; 37:577–82.

78. Sussman N, Jackel R, Kaplan L, et al. Bicycling movements as a manifestation of complex partial seizures of temporal lobe origin. Epilepsia 1989;30:527–31.

Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6

From the Department of Neurology, University of Maryland School of Medicine, Baltimore, MD.

 

Abstract

  • Objective: To provide a review of psychogenic nonepileptic seizures, including a discussion of the diagnosis, treatment, and clinical significance of the disorder.
  • Methods: Review of the relevant literature.
  • Results: Psychogenic nonepileptic seizures are a common and potentially disabling neurologic disorder. They are most prevalent in young adults and are more common in women than in men. Certain psychosocial variables may influence the development of the condition. The diagnosis is made through a detailed history and observation of clinical events in conjunction with video EEG monitoring. Neuropsychological testing is an important component of the evaluation. Treatment includes establishment of an accurate diagnosis, management of any underlying psychiatric diagnoses, and regular follow-up with a neurologist or trained care provider.
  • Conclusion: Psychogenic nonepileptic seizures represent a complex interaction between neurologic and psychological factors. Obtaining an accurate diagnosis through the use of video EEG monitoring and clinical observation is an important initial step in treatment and improved quality of life in this patient population.

 

Psychogenic nonepileptic seizures (PNES) are commonly encountered in outpatient specialty epilepsy clinics as well as inpatient epilepsy monitoring units. They comprise approximately 20% of all refractory seizure disorders referred to specialty epilepsy centers [1–4]. PNES are thought to be psychological in origin as opposed to arising from abnormal electrical discharges as in epileptic seizures. PNES may be more frequent and disabling than epileptic seizures, and patients with PNES may report worse outcomes [5,6]. Increased utilization of long-term video EEG monitoring along with greater recognition of psychogenic neurologic disorders has allowed for improved diagnosis of PNES. However, many diagnostic and therapeutic challenges remain. There are often delays in obtaining an accurate diagnosis, and optimal management remains challenging, often leading to inappropriate, ineffective, and costly treatment, sometimes for many years [6–8].

Epidemiology

PNES are seen across the spectrum of age-groups, from children [9,10] to elderly persons, but they occur most often in young adults between the ages of 15 and 35 years [1,8]. Caution should be used when considering this diagnosis in infants or young children, in whom it is more common to see physiologic events that may mimic epileptic seizures, including gastroesophageal reflux, shuddering, night terrors, or breath-holding spells [1,9,10].

PNES are prevalent within epilepsy practices. Patients with PNES comprise approximately 5% to 20% of patients thought to have intractable epilepsy seen in outpatient centers, and they account for 10% to 40% of patients in epilepsy monitoring units [1,2,6,8]. A population-based study in Iceland estimated the incidence of PNES at 1.4 per 100,000 people overall and 3.4 per 100,000 among persons between the ages of 15 and 24 years [4].

There is a female preponderance in PNES, similar to other conversion and somatoform disorders; overall, women comprise approximately 70% to 80% of patients with the diagnosis [1,2,6]. Certain psychosocial variables are seen in some patients with this disorder. An important factor that has been described is a past history of sexual or physical abuse. In one series, there was a history of sexual abuse in almost 25% of patients with PNES, and a history of sexual abuse, physical abuse, or both in 32% of patients [11]. A history of sexual and/or physical abuse is not exclusive to these patients and can certainly be seen in patients with epilepsy as well; for example, in a control population of epilepsy patients, the reported rate of past sexual or physical abuse approached 9% [12].

A prior history of head trauma, often relatively mild, has been described as a potential inciting factor in some cases of PNES [6,13]. Studies report that as many as 20% of patients with PNES attribute their seizures to head trauma [6,14].

Historical Context

Historically, what today are called PNES originate in the concept of hysteria, a medical diagnosis in women that can be traced to antiquity [15,16]. By the late 1800s, one of the founders of neurology, Jean-Martin Charcot, established hysterical seizures as an important clinical entity with his detailed, elegant descriptions of patients. Charcot formulated clinical methods for distinguishing hysteria, and particularly hysterical seizures, from epilepsy. He presumed that hysteria and epilepsy were closely related, and he termed seizures due to hysteria “hysteroepilepsy” or “epileptiform hysteria.” Charcot proposed that hysterical seizures were organic disorders of the brain, like other forms of seizures and epilepsy, and emphasized their relation to disturbance of the female reproductive system [17,18]. He used techniques such as manipulation of “hysterogenic zones” and ovarian compression, as well as suggestion, to both treat and provoke hysteria and hysterical seizures, which he described and documented [17,18]. One of Charcot’s most celebrated students, Sigmund Freud, observed Charcot’s demonstrations but drew different conclusions. He theorized that hysteria and hysterical seizures were not organic disorders of the brain as Charcot proposed, but were rather emotional disorders of the unconscious mind due to repressed energies or drives. Based largely on the theories of Freud and Charcot, individuals with hysteria were distinguished from those with epilepsy: hysterical seizures were related to psychological dysfunction, whereas epileptic seizures were associated with physical or organic brain disorders [15,16].

With the introduction of EEG recording in the 1930s, it became possible to characterize epilepsy as an electrical disorder of the brain with associated EEG changes and more effectively distinguish it from hysterical seizures, which did not have such abnormalities. In addition, in the first half of the 20th century, the nature of hysteria as seen and diagnosed by physicians seemed to change. The dramatic, theatrical convulsions described by Charcot and his contemporaries appeared less commonly, while disorders such as chronic pain seemed to increase [1,19].

However, by the 1960s, several reports confirmed that hysterical seizures were actually still prevalent. Newer terms such as “pseudoseizures” were used to describe these disorders because the term “hysteria” was considered somewhat derogatory, anti-feminist, and antiquated [20,21]. In the 1970s and thereafter, with the increasing availability of video EEG monitoring and the growth of inpatient epilepsy monitoring units, it became clear that these hysterical, pseudo-, or, as they were by then termed, psychogenic seizures remained common [1,22].

More recently, it has been recognized that the pendulum in some cases may have swung too far in regard to the diagnosis of this disorder. Some rare patients with seizures initially diagnosed as PNES may actually have forms of epileptic seizures such as frontal lobe epilepsy or related physiological disorders rather than psychogenic causes for their episodes [1,23]. These types of epileptic seizures can be very difficult to diagnose properly unless one appreciates how they present and manifest and remains vigilant for them during evaluation [1,23].

Terminology

There is ongoing debate regarding the appropriate terminology for psychogenic events, and there is no uniform, standardized definition or classification at this time. The term currently preferred within the epilepsy community for seizures of psychological origin that are thought to be associated with conversion, somatization, or dissociative disorders is “psychogenic nonepileptic seizures” (PNES). This terminology is felt to be neutral and non-disparaging compared with previously favored terms such as “pseudoseizures.” “Nonepileptic seizures” or “nonepileptic events” are broader terms meant to encompass both physiologic and psychological causes of disorders that are mistaken for epilepsy. PNES are widely defined as paroxysmal events that appear similar to epileptic seizures but are not due to abnormal electrical discharges in the brain and, as noted, are typically thought to be related to or caused by conversion, somatization, or dissociative disorders.

Physiologic nonepileptic events are another category of physical disorders that may be mistaken for epilepsy. The underlying causes differ between age-groups, and can include conditions such as cardiac arrhythmias, migraine variants, syncope, or metabolic abnormalities. Physiologic nonepileptic seizures account for only a small proportion of all patients with nonepileptic seizures or events [1]. In general, any patient with a psychological disorder that causes symptoms that are mistaken for epilepsy can be said to have PNES.

Clinical Characteristics and Presentation

PNES and epileptic seizures are distinguished predominantly through clinical observation, descriptions from the patient or witnesses, and an understanding of seizure semiology. Although video EEG may be needed to confirm the diagnosis, certain clinical characteristics and historical details can help to distinguish between the 2 disorders (Table 1) [24,25]. Features to consider include movements and/or vocalizations during seizures, seizure duration, and other factors such as injury, incontinence, and amnesia [1,24,25]. Caution must be taken not to rely on any one sign or feature in isolation, as none is pathognomonic.

The duration of PNES is often significantly longer than that of epileptic seizures, which usually last less than 3 minutes, excluding the postictal period. PNES may also exhibit waxing and waning convulsive activity, although this finding can certainly be seen in epileptic seizures as well. Patients with PNES may be distractible by external stimuli. Additionally, the movements in PNES may appear asymmetric, asynchronous, or purposeful, although this is not diagnostic; such movements may contrast with the well-defined, synchronous tonic-clonic activity typically seen in epileptic seizures [1,24,25]. Back arching and pelvic thrusting movements can also be seen in PNES. Despite these differences, it may still be challenging to distinguish the semi-purposeful behaviors of PNES from the automatisms of certain focal epileptic seizures, and the often bizarre-appearing hypermotor activity of frontal lobe seizures is especially difficult to differentiate from PNES [1,23].

Another important consideration is the patient’s level of consciousness: consciousness is preserved in PNES, whereas consciousness and responsiveness are frequently impaired in epileptic seizures. Patients with PNES often appear unresponsive during events, although there is no true impairment of awareness. Other characteristics more commonly seen in PNES include crying and eye closure [26,27]. Self-injury and incontinence may be reported, but they are less often clearly witnessed or documented [28]. Additionally, although patients may at times appear to be asleep at seizure onset, EEG recordings document actual sleep in less than 1% of cases [29]. Finally, while epileptic seizures often respond well to antiepileptic medications, PNES characteristically do not [1,3,6,8].

In certain situations, provocation maneuvers may be utilized in order to reproduce PNES in patients undergoing EEG monitoring. In comparison to epileptic seizures, suggestion and emotional stimuli are more likely to trigger psychogenic events [1]. Methods utilized to provoke PNES may include saline injections, placement of a tuning fork on the head or body, or even hypnosis, when a suggestion is concurrently provided that such maneuvers can trigger the patient’s seizures [1,30,31]. When evaluating seizures that are provoked in such a manner, it is important to consider whether or not the event captured is in fact a typical event for the patient, or whether the provocation has uncovered a different, atypical event. Given that PNES and epileptic seizures can co-exist within the same patient, care should be taken to avoid making a diagnosis based on capturing an atypical event, or capturing only a subset of a patient’s seizure types. This could result in failure to make an accurate and thorough diagnosis [23]. There is debate regarding the ethics of provoking seizures by way of suggestion. Some members of the epilepsy community feel that provoking seizures through suggestion is inherently deceitful, and therefore can damage the physician-patient relationship. Others assert that such provocative testing can be undertaken in an honest manner, and can ultimately help achieve an accurate diagnosis for the patient [32].

As previously mentioned, there is a proportion of patients who have co-existing epileptic seizures and PNES, and obtaining an accurate diagnosis can be especially challenging in this group. Studies have reported that around 10% to 40% of patients with PNES also have epilepsy [1,22,23,33]. Care must be taken to distinguish between differences in seizure types and if necessary, video EEG monitoring may be needed to capture both seizure types for an accurate diagnosis. This testing can then be useful in education with families and caregivers who may be shown the videos with consent from the patient in order to guide future care.

Evaluation and Diagnosis

As in much of neurology, a thorough history, along with detailed clinical observation remains essential in the diagnosis of patients with PNES and for distinguishing these events from epilepsy. Video EEG monitoring of seizures is a key adjunct to the history and clinical observation in diagnosing this condition [1,2]. Long-term video EEG monitoring is considered the “gold standard” in the characterization and differential diagnosis of seizures. Additional potentially helpful diagnostic techniques include video EEG-monitored seizure provocation, serum prolactin levels, single photon emission computed tomography, and neuropsychological testing.

Video EEG Monitoring

Video EEG monitoring, often undertaken in dedicated inpatient epilepsy monitoring units, has become a mainstay for the diagnosis of psychogenic seizures. Ideally, a typical seizure is recorded with simultaneous EEG and video monitoring, with no evidence of epileptic activity seen during the event. In patients with generalized convulsive epileptic seizures, the EEG should show an ictal correlate during the seizure. In the case of focal seizures with impaired awareness (complex partial seizures), the EEG will demonstrate a corresponding ictal abnormality in 85% to 95% of cases [1]. Focal seizures without impaired awareness (simple partial seizures) may not necessarily be associated with a corresponding EEG change: up to 60% of such seizures have been shown to produce an ictal EEG abnormality, and this number may rise to almost 80% if multiple seizures are captured [34]. It is extremely important to capture a typical event with video EEG monitoring because an interictal or routine EEG may not provide all of the information needed to make a diagnosis. Specifically, a normal routine (non-ictal) EEG may be seen in epilepsy patients, and minor or non-specific abnormalities can be seen on EEGs of patients with PNES (Table 2) [1,6,8,22].

EEG monitoring for characterization of clinical events can be conducted on an ambulatory or outpatient basis or in dedicated inpatient epilepsy monitoring units. Ambulatory monitoring can be useful in the case of patients who report seizures that are more frequent in their home environment or in patients with frequent events. If events are infrequent, then inpatient monitoring may be more efficacious [1]. With longer-term inpatient monitoring, antiepileptic medications can be withdrawn in a supervised setting, in order to lower the seizure threshold as well as to safely discontinue medications that may not be necessary. Such medication titrations are typically not safe in an unsupervised outpatient setting. Some ambulatory EEG monitoring systems do allow for simultaneous video and EEG recording. However, an advantage to inpatient monitoring, which is not afforded in the outpatient setting, is the ability for nursing staff or physicians to perform clinical testing during events to assess for patient responsiveness and other features. Additionally, with inpatient monitoring, EEG technicians can routinely assess for any technical problems with the electrodes or recording system.

Another benefit of video EEG monitoring is that the state (waking, drowsy, or asleep) of the patient at the onset of an event can be established. While epileptic seizures can arise from any state, PNES most often occur from wakefulness. Patients with PNES may appear to be asleep at the onset of events, and they may report seizures from sleep. Video EEG monitoring can help to establish the waking or sleep state of the patient that may aid in diagnosis [29].

Prolactin Levels

Serum prolactin levels may be helpful in the diagnosis of PNES [35,36]. Following generalized tonic-clonic or complex partial epileptic seizures, the serum prolactin level can rise anywhere from two- to threefold up to five- to tenfold [37]. The maximal rise in serum prolactin occurs in the initial 20 to 60 minutes after the seizure [35–37]. A similar rise in serum prolactin would not be expected in PNES. Although prolactin levels may have some utility in diagnosis, they are not currently ordered routinely as part of a standard admission to most inpatient epilepsy monitoring units. This may be due in part to the fact that false-positive and false-negative results can occur [37–39]. For example, there may be no rise in the prolactin level after a simple partial seizure or a more subtle complex partial seizure.
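The magnitude and timing figures above can be folded into a simple screening check. The sketch below is a hypothetical illustration, not a validated protocol: the twofold-rise threshold and the 20-to-60-minute sampling window are assumptions drawn from the ranges quoted above, and, as noted, a negative result does not exclude an epileptic seizure:

```python
def prolactin_suggests_epileptic(baseline_ng_ml: float,
                                 postictal_ng_ml: float,
                                 minutes_after_event: float,
                                 fold_threshold: float = 2.0) -> bool:
    """Return True if a postictal prolactin rise is consistent with a
    generalized tonic-clonic or complex partial epileptic seizure.
    Illustrative assumptions (not a validated protocol): the sample is
    drawn within the 20-60 minute window of maximal rise, and a rise of
    at least `fold_threshold` over baseline counts as supportive."""
    if not (20 <= minutes_after_event <= 60) or baseline_ng_ml <= 0:
        raise ValueError("sample outside assumed window or invalid baseline")
    return postictal_ng_ml / baseline_ng_ml >= fold_threshold

# A threefold rise measured 30 minutes after the event would be
# supportive; an unchanged level would not (though, per the text,
# absence of a rise does not rule out epilepsy).
print(prolactin_suggests_epileptic(10.0, 30.0, 30))  # True
print(prolactin_suggests_epileptic(10.0, 11.0, 30))  # False
```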

Neuropsychological Testing

Neuropsychological testing is also a key component in the evaluation and diagnosis of PNES. Ideally, a mental health provider with a background in psychological assessment and neuropsychological intervention for patients with psychogenic disorders would perform the evaluation [40,41].

The goal of the evaluation should not be focused solely on whether the patient suffers from nonepileptic or epileptic seizures; that determination is better made by an epileptologist upon review of the clinical, electrographic, and neuropsychological data. Moreover, neuropsychological testing cannot in itself either diagnose or exclude the possibility that a seizure disorder is nonepileptic because of the considerable overlap between epileptic and nonepileptic test results [40,41]. Neuropsychological evaluations aid this assessment by (1) determining the likelihood of significant contributing psychopathology or cognitive difficulties, (2) defining the nature of the associated psychological or psychosocial issues, and (3) assessing how a patient might benefit from various psychologically based interventions [1]. The testing may identify psychological problems that can guide treatment after diagnosis.

Delays in Diagnosis

Correct and prompt diagnosis is essential for patients with PNES, as is appropriate referral to a knowledgeable, trained mental health professional. In one large series, patients with PNES were diagnosed a mean of 7.2 years after the first manifestation (SD 9.3 years), and mean delays of 5 to 7 years have been reported across studies. Younger age, interictal epileptiform potentials on EEG, and anticonvulsant treatment are associated with longer delays [42,43]. Delays are also thought to occur because of problems with “ownership” of these patients. Although neurologists are typically involved in the diagnosis of PNES, often using video EEG monitoring in an inpatient setting, the next step is usually referral to a psychiatrist or other mental health care provider. There can be delays in the initial referral to the neurologist, in referral to specialists for video EEG testing, and in referral to the physicians, psychologists, or social workers who may provide treatment. Another disconnect can occur when patients who receive a referral for mental health care are lost to follow-up, either because they do not pursue the referral on their own or because the reason for this care is not fully explained. In addition, many mental health professionals are not trained in the evaluation and treatment of psychogenic symptoms and may even feel uncomfortable dealing with these patients [13,44].

Many studies suggest that delays in diagnosis may result in poorer outcomes [45,46], while others suggest that patients who receive a diagnosis of PNES acutely upon presentation may do particularly well [8,47–49]. Some of the most recent large outcome studies indicate that delays in diagnosis may not worsen outcome and that outcome is predicted by other factors [50–52].

Management

Management of patients with PNES is similar to that for patients with other types of so-called abnormal illness behavior, although there remains a relative paucity of evidence for specific treatment strategies for PNES [1]. The first consideration should be the manner in which the diagnosis of PNES is presented to the patient and family. It is important to be honest with the patient and demonstrate a positive approach to the diagnosis [53]. The physician should emphasize as favorable or good news the fact that the patient does not have epilepsy, and should also stress that the disorder, although serious and "real," does not require treatment with antiepileptic medications and that once stress or emotional issues are resolved, the patient has the potential to gain better control of these events [1,54,55]. Nevertheless, not all patients readily accept the diagnosis or this type of approach. Some patients may seek other opinions, and this should not be discouraged. An adversarial relationship with the patient should be avoided. The patient should be encouraged to return if desired, and records should be made available to other health care providers to avoid duplication of services.

After the diagnosis of PNES is presented, supportive measures should be initiated. PNES patients may benefit from education and support that can be provided by the neurologist or primary care physician [1]. If the neuropsychological assessment suggests a clinical profile that requires a professional mental health intervention, then an appropriate referral should be made. Regular follow-up visits for the patient with the neurologist are useful even if a mental health professional is involved [49,56]. This allows the patient to get medical attention without demonstrating illness behavior. Patient education and support are stressed at these visits. Because family issues are often important contributing factors, physicians should consider involving family members in visits with consent of the patient [1].

A variety of treatment strategies are employed for the management of PNES, including cognitive behavioral therapy (CBT), group and family therapy, antidepressant medication, and other forms of rehabilitation [5,57,58]. A 2007 Cochrane review that identified 608 references for non-medication PNES treatments found that only 3 studies met criteria for a randomized controlled trial. CBT, which has previously been applied to various somatoform and other psychiatric disorders, is one of the more recently favored treatment options for PNES [57,59,60]. This form of psychotherapy can be administered by trained personnel in a time-limited fashion using defined protocols. The basis of this treatment is that the patient learns to increase awareness of dysfunctional thoughts and learns new ways to respond to them [57,58]. To date, several groups have reported results of nonrandomized trials as well as case reports and case series that support the utility of this treatment. There have been reports of significant reductions in seizure frequency, and this treatment strategy appears very promising [61–65]. Preliminary randomized controlled trials have also been piloted and suggest that this may be an effective treatment approach [66].

Prognosis

The outcomes of patients with PNES vary. Long-term follow-up studies show that about half of all patients with PNES function reasonably well following their diagnosis. However, only approximately one-third of patients will completely stop having seizures or related problems, and approximately 50% have poor functional outcomes [1,2,50]. When the diagnosis of PNES is based on reliable criteria such as video EEG monitoring, misdiagnosis is unlikely. Instead, the usual cause of a poor outcome is a patient’s chronic psychological and social problems [1,8,22,50].

It is noteworthy that children with PNES appear to have a much better prognosis than adults [9,10]. In fact, the etiology in children may be related more to transient stress and coping disorders, while adults are more likely to have PNES within the context of more chronic psychological maladjustment, such as personality disorders [10]. Another factor that accounts for the better outcomes in children is that they are usually properly diagnosed earlier in the course of their disorder [9,10].

Patients with milder psychopathology respond better to supportive educational or behavioral therapeutic approaches. In contrast, patients with more severe psychopathology and factitious disorders more often have associated chronic personality problems and, correspondingly, a poorer prognosis [1,50]. It also appears that patients who continue to be followed by the diagnosing neurologist or center do better than patients who are not seen after diagnosis [49,67]. As knowledge about the nature of PNES and their associated psychopathology is gained, better treatment strategies can be developed that will improve the care and prognosis of these difficult and challenging patients.

A large study of 164 patients followed for 10 years found generally poor outcomes, but favorable factors included higher education, younger age at onset and diagnosis, and less “dramatic” attacks, defined as a lack of positive motor features, ictal incontinence, or tongue biting. These findings were consistent with prior studies [52,68].

In addition, the patients who tended to have fewer seizures and do better long term had fewer somatoform and dissociative symptoms on psychometric testing [51]. These findings are often explained by the theory that patients who do not do well have poor strategies for coping with stress and anxiety and, in a sense, suffer from emotional dysregulation.

Special Issues

Coexisting Epileptic and Psychogenic Nonepileptic Seizures

A complicating factor in diagnosis is that both PNES and epileptic seizures may occur in a single patient. Indeed, approximately 10% to 40% of patients identified as having PNES have also been reported to have epileptic seizures [1,23,33,56]. There are several possible explanations for this. Some patients with epilepsy may learn that seizures result in attention and fill certain psychological needs. Alternatively, they may have concomitant neurologic problems, personality disorders, cognitive deficits, or impaired coping mechanisms that predispose them to psychogenic symptoms [69–71]. Fortunately, in such patients with combined seizure disorders, the epileptic seizures are usually well controlled or of only historical relevance by the time a patient develops PNES [1,22,23,33,72–74].

In other patients, epileptic seizures and PNES may start simultaneously, making management even more complex. In such patients, we have found it particularly helpful to focus on the semiology of seizure manifestations as recorded by video EEG monitoring to distinguish PNES from the epileptic seizures. We then direct treatment according to the semiology manifesting at that time. We have also found it useful to show the videos of seizures to family members or caregivers, with patient consent, to help them understand how best to respond to a patient’s symptoms when epileptic seizures and PNES coexist.

Misdiagnosis of Psychogenic Nonepileptic Seizures

Sometimes events that are initially diagnosed as nonepileptic actually prove to be epileptic. Such events have been called “pseudo-pseudo” or “epileptic-nonepileptic” seizures [1]. Frontal lobe seizures in particular may not be associated with significant ictal EEG changes and may therefore be misdiagnosed as PNES [23,75,76]. The clinical presentation and proper diagnosis of these types of events warrant emphasis.

Notable manifestations of frontal lobe seizures that may easily be confused with hysterical behavior include shouting, laughing, cursing, clapping, snapping, genital manipulation, pelvic thrusting, pedaling, running, kicking, and thrashing [23,75–77]. Not all of these behaviors are specific for frontal lobe seizures. For example, bicycling leg movements have also been reported in seizures originating from the temporal lobe [78].

Summary

PNES represent a common yet challenging problem within neurology. This is due to the difficulty in diagnosis as well as lack of effective and widely available treatment options. Overall outcomes of patients with PNES vary, and may relate to an individual patient’s chronic psychological and social problems. However, an accurate and timely diagnosis remains critical and can help provide direction for implementing appropriate treatment.

 

Corresponding author: Jennifer Hopp, MD, Department of Neurology, University of Maryland Medical Center, Room S12C09, 22 South Greene Street, Baltimore, MD 21201, [email protected].

Financial disclosures: None.

From the Department of Neurology, University of Maryland School of Medicine, Baltimore, MD.

 

Abstract

  • Objective: To provide a review of psychogenic nonepileptic seizures, including a discussion of the diagnosis, treatment, and clinical significance of the disorder.
  • Methods: Review of the relevant literature.
  • Results: Psychogenic nonepileptic seizures are a common and potentially disabling neurologic disorder. They are most prevalent in young adults and are more commonly seen in women than in men. Certain psychosocial variables may impact the development of the condition. The diagnosis is made through a detailed history and observation of clinical events in conjunction with video EEG monitoring. Neuropsychological testing is an important component of the evaluation. Treatment includes establishment of an accurate diagnosis, management of any underlying psychiatric diagnoses, and regular follow-up with a neurologist or trained care provider.
  • Conclusion: Psychogenic nonepileptic seizures represent a complex interaction between neurologic and psychological factors. Obtaining an accurate diagnosis through the use of video EEG monitoring and clinical observation is an important initial step in treatment and improved quality of life in this patient population.

 

Psychogenic nonepileptic seizures (PNES) are commonly encountered in outpatient specialty epilepsy clinics as well as inpatient epilepsy monitoring units. They comprise approximately 20% of all refractory seizure disorders referred to specialty epilepsy centers [1–4]. PNES are thought to be psychological in origin as opposed to arising from abnormal electrical discharges as in epileptic seizures. PNES may be more frequent and disabling than epileptic seizures, and patients with PNES may report worse outcomes [5,6]. Increased utilization of long-term video EEG monitoring along with greater recognition of psychogenic neurologic disorders has allowed for improved diagnosis of PNES. However, many diagnostic and therapeutic challenges remain. There are often delays in obtaining an accurate diagnosis, and optimal management remains challenging, often leading to inappropriate, ineffective, and costly treatment, sometimes for many years [6–8].

Epidemiology

PNES are seen across the spectrum of age-groups, from children [9,10] to elderly persons, but they most often occur in young adults between the ages of 15 and 35 years [1,8]. Caution should be used when considering this diagnosis in infants or young children, in whom it is more common to see physiologic events that may mimic epileptic seizures, including gastroesophageal reflux, shuddering, night terrors, or breath-holding spells [1,9,10].

PNES are prevalent within epilepsy practices. Patients with PNES comprise approximately 5% to 20% of patients thought to have intractable epilepsy seen in outpatient centers, and within epilepsy monitoring units they account for 10% to 40% of patients [1,2,6,8]. A population-based study estimated the incidence of PNES at 1.4 per 100,000 people overall and 3.4 per 100,000 people aged 15 to 24 years [4].
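To put these incidence figures in concrete terms, they can be converted into expected new cases per year for an assumed catchment population (the population size below is chosen purely for illustration):

```python
def expected_annual_cases(population: int, incidence_per_100k: float) -> float:
    """Expected new diagnoses per year in a given population, for an
    incidence expressed per 100,000 people (a back-of-the-envelope
    estimate assuming the incidence applies uniformly)."""
    return population * incidence_per_100k / 100_000

# For a hypothetical catchment of 1 million people, the overall
# incidence of 1.4 per 100,000 predicts about 14 new cases per year.
print(round(expected_annual_cases(1_000_000, 1.4), 1))  # 14.0
```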

There is a female preponderance in PNES, which is similar to other conversion and somatoform disorders. Overall, women comprise approximately 70% to 80% of patients with the PNES diagnosis [1,2,6]. There are psychosocial variables that are seen in some patients with this disorder. An important factor that has been described is past history of sexual or physical abuse. In one series, there was a history of sexual abuse in almost 25% of patients with PNES, and history of either sexual abuse, physical abuse, or both in 32% of patients [11]. A history of sexual and/or physical abuse is not exclusive to these patients, and can certainly be seen in patients with epilepsy as well. For example, in a control population of epilepsy patients, there was a reported rate of past sexual or physical abuse approaching 9% [12].

A prior history of head trauma, often of a relatively mild degree, has been described as a potential inciting factor for some cases of PNES [6,13]. Studies report that as many as 20% of PNES patients attribute their seizures to head trauma [6,14].

Historical Context

Historically, what today are called PNES originate with the concept of hysteria, a medical diagnosis in women that can be traced to antiquity [15,16]. By the late 1800s, one of the founders of neurology, Jean-Martin Charcot, established hysterical seizures as an important clinical entity with his detailed, elegant descriptions of patients. Charcot formulated clinical methods for distinguishing hysteria, and particularly hysterical seizures, from epilepsy. He presumed that hysteria and epilepsy were closely related, and he termed seizures due to hysteria “hysteroepilepsy” or “epileptiform hysteria.” Charcot proposed that hysterical seizures were organic disorders of the brain, like other forms of seizures and epilepsy, and emphasized their relation to disturbance of the female reproductive system [17,18]. Charcot utilized techniques such as manipulation of “hysterogenic zones” and ovarian compression, as well as suggestion, to both treat and provoke hysteria and hysterical seizures, which he described and documented [17,18]. One of Charcot’s most celebrated students, Sigmund Freud, observed Charcot’s demonstrations but drew different conclusions. He theorized that hysteria and hysterical seizures were not organic disorders of the brain as Charcot proposed, but rather emotional disorders of the unconscious mind due to repressed energies or drives. Based largely on the theories of Freud and Charcot, individuals with hysteria were distinguished from those with epilepsy, with hysterical seizures attributed to psychological dysfunction while epileptic seizures were associated with physical or organic brain disorders [15,16].

With the introduction of EEG recording in the 1930s, it became possible to characterize epilepsy as an electrical disorder of the brain with associated EEG changes and more effectively distinguish it from hysterical seizures, which did not have such abnormalities. In addition, in the first half of the 20th century, the nature of hysteria as seen and diagnosed by physicians seemed to change. The dramatic, theatrical convulsions described by Charcot and his contemporaries appeared less commonly, while disorders such as chronic pain seemed to increase [1,19].

However, by the 1960s, several reports confirmed that hysterical seizures were actually still prevalent. Newer terms like “pseudoseizures” were used to describe these disorders because the term “hysteria” was thought to be somewhat derogatory, anti-feminist, and antiquated [20,21]. In the 1970s and thereafter, with the increasing availability of video EEG monitoring and growth of inpatient epilepsy monitoring units, it was discovered that these hysterical, pseudo-, or what were also by then termed psychogenic seizures, were actually still common [1,22].

More recently, it has been recognized that the pendulum in some cases may have swung too far in regard to the diagnosis of this disorder. Some rare patients with seizures initially diagnosed as PNES may actually have forms of epileptic seizures such as frontal lobe epilepsy or related physiological disorders rather than psychogenic causes for their episodes [1,23]. These types of epileptic seizures can be very difficult to diagnose properly unless one appreciates how they present and manifest and remains vigilant for them during evaluation [1,23].

Terminology

There is an ongoing debate regarding the appropriate terminology for psychogenic events, and there is no uniform standardized definition or classification at this time. The term that is currently preferred within the epilepsy community for seizures of psychological origin that are thought to be associated with conversion, somatization, or dissociative disorders is “psychogenic nonepileptic seizures” (PNES). This terminology is felt to be non-disparaging and more neutral as compared with other terms such as pseudoseizures, which were previously favored. Nonepileptic seizures or nonepileptic events are broader terms meant to incorporate both physiologic and psychological causes for disorders that are mistaken for epilepsy. PNES are widely defined as paroxysmal events that appear similar to epileptic seizures but are not due to abnormal electrical discharges in the brain and as noted, are typically thought to be related or caused by conversion, somatization, or dissociative disorders.

Physiologic nonepileptic events are another category of physical disorders that may be mistaken for epilepsy. The underlying causes differ between age-groups, and can include conditions such as cardiac arrhythmias, migraine variants, syncope, or metabolic abnormalities. Physiologic nonepileptic seizures account for only a small proportion of all patients with nonepileptic seizures or events [1]. In general, any patient with a psychological disorder that causes symptoms that are mistaken for epilepsy can be said to have PNES.

Clinical Characteristics And Presentation

PNES and epileptic seizures are predominantly distinguished through clinical observation along with descriptions from the patient or witnesses, and an understanding of seizure semiology. Although video EEG may be needed to confirm the diagnosis, certain clinical characteristics and historical details can help to distinguish between the 2 disorders (Table 1) [24,25]. Features to consider include movements and/or vocalizations during seizures, duration of seizures, and other factors such as injury, incontinence, and amnesia [1,24,25]. Caution must be taken not to use one sign or feature in isolation, as none have been found to be specifically pathognomonic.

The duration of PNES is often significantly longer than that seen in epileptic seizures, which usually last less than 3 minutes, excluding the postictal period. PNES may also exhibit waxing and waning convulsive activity, although this finding can certainly be seen in epileptic seizures as well. PNES may be shown to have distractibility with external stimuli. Additionally, the movements in PNES may appear asymmetric, asynchronous, or purposeful, although this is not diagnostic for this disorder. This may contrast with the well-defined, synchronous tonic-clonic activity typically seen in epileptic seizures [1,24,25]. Back arching and pelvic thrusting movements can also be seen in PNES. Despite these differences, it may still be challenging to distinguish the semi-purposeful behaviors of PNES from the automatisms of certain focal epileptic seizures. The often bizarre-appearing, hypermotor activity that can be seen in frontal lobe seizures is often especially difficult to differentiate from PNES [1,23].

Another important consideration is that consciousness is preserved in PNES, while consciousness and responsiveness are frequently impaired in epileptic seizures. Patients with PNES are often apparently unresponsive during events, although there is no true impairment of awareness. Other characteristics that are more commonly seen in PNES are crying and eye closure [26]. Self-injury and incontinence may be reported, but they are less often clearly witnessed or documented [27,28]. Additionally, although patients may at times appear to be asleep at seizure onset, EEG recordings document that the patient is actually asleep in less than 1% of cases [29]. While epileptic seizures often respond well to antiepileptic medications, PNES characteristically do not [1,3,6,8].

In certain situations, provocation maneuvers may be utilized in order to reproduce PNES in patients undergoing EEG monitoring. In comparison to epileptic seizures, suggestion and emotional stimuli are more likely to trigger psychogenic events [1]. Methods utilized to provoke PNES may include saline injections, placement of a tuning fork on the head or body, or even hypnosis, when a suggestion is concurrently provided that such maneuvers can trigger the patient’s seizures [1,30,31]. When evaluating seizures that are provoked in such a manner, it is important to consider whether or not the event captured is in fact a typical event for the patient, or whether the provocation has uncovered a different, atypical event. Given that PNES and epileptic seizures can co-exist within the same patient, care should be taken to avoid making a diagnosis based on capturing an atypical event, or capturing only a subset of a patient’s seizure types. This could result in failure to make an accurate and thorough diagnosis [23]. There is debate regarding the ethics of provoking seizures by way of suggestion. Some members of the epilepsy community feel that provoking seizures through suggestion is inherently deceitful, and therefore can damage the physician-patient relationship. Others assert that such provocative testing can be undertaken in an honest manner, and can ultimately help achieve an accurate diagnosis for the patient [32].

As previously mentioned, there is a proportion of patients who have co-existing epileptic seizures and PNES, and obtaining an accurate diagnosis can be especially challenging in this group. Studies have reported that around 10% to 40% of patients with PNES also have epilepsy [1,22,23,33]. Care must be taken to distinguish between differences in seizure types and if necessary, video EEG monitoring may be needed to capture both seizure types for an accurate diagnosis. This testing can then be useful in education with families and caregivers who may be shown the videos with consent from the patient in order to guide future care.

Evaluation And Diagnosis

As in much of neurology, a thorough history, along with detailed clinical observation, remains essential in the diagnosis of patients with PNES and in distinguishing these events from epilepsy. Video EEG monitoring of seizures is a key adjunct to the history and clinical observation in diagnosing this condition [1,2]. Long-term video EEG monitoring is considered the “gold standard” in the characterization and differential diagnosis of seizures. Additional potentially helpful diagnostic techniques include video EEG-monitored seizure provocation, serum prolactin levels, single photon emission computed tomography, and neuropsychological testing.

Video EEG Monitoring

Video EEG monitoring, often undertaken in dedicated inpatient epilepsy monitoring units, has become a mainstay for diagnosis of psychogenic seizures. Ideally, a typical seizure is recorded with simultaneous EEG and video monitoring with no evidence of epileptic activity seen during the event. In patients with generalized convulsive epileptic seizures, the EEG should show an ictal correlate during the seizure. In the case of focal seizures with impaired awareness (complex partial seizures), the EEG will demonstrate a corresponding ictal abnormality in 85 to 95% of cases [1]. Focal seizures without impaired awareness (simple partial seizures) may not necessarily be associated with a corresponding EEG change. Up to 60% of such seizures have been shown to produce an ictal EEG abnormality, and this number may rise to almost 80% if multiple seizures are captured [34]. It is extremely important to capture a typical event with video EEG monitoring because an interictal or routine EEG may not provide all of the needed information to make a diagnosis. Specifically, a normal routine (non-ictal) EEG may be seen in epilepsy patients, and minor or non-specific abnormalities can be seen on EEGs of patients with PNES (Table 2) [1,6,8,22].

EEG monitoring for characterization of clinical events can be conducted on an ambulatory or outpatient basis or in dedicated inpatient epilepsy monitoring units. Ambulatory monitoring can be useful in the case of patients who report seizures that are more frequent in their home environment or in patients with frequent events. If events are infrequent, then inpatient monitoring may be more efficacious [1]. With longer-term inpatient monitoring, antiepileptic medications can be withdrawn in a supervised setting, in order to lower the seizure threshold as well as to safely discontinue medications that may not be necessary. Such medication titrations are typically not safe in an unsupervised outpatient setting. Some ambulatory EEG monitoring systems do allow for simultaneous video and EEG recording. However, an advantage to inpatient monitoring, which is not afforded in the outpatient setting, is the ability for nursing staff or physicians to perform clinical testing during events to assess for patient responsiveness and other features. Additionally, with inpatient monitoring, EEG technicians can routinely assess for any technical problems with the electrodes or recording system.

Another benefit of video EEG monitoring is that the state (waking, drowsy, or asleep) of the patient at the onset of an event can be established. While epileptic seizures can arise from any state, PNES most often occur from wakefulness. Patients with PNES may appear to be asleep at the onset of events, and they may report seizures from sleep. Video EEG monitoring can help to establish the waking or sleep state of the patient that may aid in diagnosis [29].

Prolactin Levels

Serum prolactin levels may be helpful in the diagnosis of PNES [35,36]. Following generalized tonic-clonic or complex partial epileptic seizures, the serum prolactin can rise from two to threefold to five to tenfold [37]. The maximal rise in serum prolactin occurs in the initial 20 to 60 minutes after the seizure [35–37]. A similar rise in serum prolactin would not be expected in PNES. Although prolactin levels may have some utility in diagnosis, they are not currently routinely ordered as part of a standard admission to most inpatient epilepsy monitoring units. This may be due in part to the fact that false-positive and false-negative results can occur with these levels [37–39]. For example, there may not be a rise in the prolactin level after a simple partial seizure or more subtle complex partial seizure.

Neuropsychological Testing

Neuropsychological testing is also a key component in the evaluation and diagnosis of PNES. Ideally, a mental health provider with a background in psychological assessment and neuropsychological intervention for patients with psychogenic disorders would perform the evaluation [40,41].

The goal of the evaluation should not solely focused on whether the patient suffers from nonepileptic or epileptic seizures. An epileptologist upon review of clinical, electrographic, and neuropsychological data better makes this determination. Moreover, neuropsychological testing cannot in itself either diagnose or exclude the possibility that a seizure disorder is nonepileptic because of the considerable overlap between epileptic and nonepileptic test results [40,41]. Neuropsychological evaluations aid this assessment by (1) determining the potential or likelihood of significant contributing psychopathology or cognitive difficulties, (2) defining the nature of the associated psychological or psychosocial issues, and (3) assessing how a patient might benefit from various psychologically based interventions [1]. The testing may identify psychological problems that can guide treatment after diagnosis.

Delays in Diagnosis

Correct and prompt diagnosis is essential for patients with PNES, as is appropriate referral to a knowledgeable, trained mental health professional. On average, however, patients with PNES are not diagnosed until 7.2 years (SD 9.3 years) after symptom onset, with mean delays of 5 to 7 years reported across studies. Younger age, interictal epileptiform potentials on EEG, and anticonvulsant treatment are associated with longer delays [42,43]. Delays are also thought to occur because of problems with “ownership” of these patients. Although neurologists are typically involved in the diagnosis of PNES, often using video EEG monitoring performed in an inpatient setting, the next step is usually referral to a psychiatrist or other mental health care provider. Delays can occur at each stage: in the initial referral to the neurologist, in referral to specialists for video EEG testing, and in referral to the physicians, psychologists, or social workers who may provide treatment. A further disconnect can occur when patients are lost to follow-up, either because they receive a referral for mental health care and do not pursue it on their own or because the reason for that care is not fully explained. In addition, many mental health professionals are not trained in the evaluation and treatment of psychogenic symptoms and may even feel uncomfortable dealing with these patients [13,44].

Many studies suggest that delays in diagnosis may result in poorer outcomes [45,46], while other studies suggest that patients who receive an acute diagnosis of PNES upon presentation may do particularly well [8,47–49]. However, some of the most recent large outcome studies suggest that delays in diagnosis are not associated with worse outcomes and that outcome is instead predicted by other factors [50–52].

Management

Management of patients with PNES is similar to that for patients with other types of so-called abnormal illness behavior, although there remains a relative paucity of evidence for specific treatment strategies for PNES [1]. The first consideration should be the manner in which the diagnosis of PNES is presented to the patient and family. It is important to be honest with the patient and demonstrate a positive approach to the diagnosis [53]. The physician should emphasize as favorable or good news the fact that the patient does not have epilepsy, and should also stress that the disorder, although serious and "real," does not require treatment with antiepileptic medications and that once stress or emotional issues are resolved, the patient has the potential to gain better control of these events [1,54,55]. Nevertheless, not all patients readily accept the diagnosis or this type of approach. Some patients may seek other opinions, and this should not be discouraged. An adversarial relationship with the patient should be avoided. The patient should be encouraged to return if desired, and records should be made available to other health care providers to avoid duplication of services.

After the diagnosis of PNES is presented, supportive measures should be initiated. PNES patients may benefit from education and support that can be provided by the neurologist or primary care physician [1]. If the neuropsychological assessment suggests a clinical profile that requires a professional mental health intervention, then an appropriate referral should be made. Regular follow-up visits for the patient with the neurologist are useful even if a mental health professional is involved [49,56]. This allows the patient to get medical attention without demonstrating illness behavior. Patient education and support are stressed at these visits. Because family issues are often important contributing factors, physicians should consider involving family members in visits with consent of the patient [1].

A variety of treatment strategies are employed in the management of PNES, including cognitive behavioral therapy (CBT), group and family therapy, antidepressant medication, and other forms of rehabilitation [5,57,58]. A 2007 Cochrane review of non-medication treatments for PNES identified 608 references but found that only 3 studies met criteria for a randomized controlled trial. CBT, which has previously been applied to various somatoform and other psychiatric disorders, has become one of the more favored treatment options for PNES [57,59,60]. This form of psychotherapy can be administered by trained personnel in a time-limited fashion using defined protocols. Its basis is that the patient learns to increase awareness of dysfunctional thoughts and learns new ways of responding to them [57,58]. To date, several groups have reported nonrandomized trials, case reports, and case series supporting the utility of this treatment, including significant reductions in seizure frequency, and the strategy appears very promising [61–65]. A preliminary randomized controlled trial has also been piloted, further suggesting that this may be a valid treatment approach [66].

Prognosis

The outcomes of patients with PNES vary. Long-term follow-up studies show that about half of all patients with PNES function reasonably well following their diagnosis. However, only approximately one-third of patients completely stop having seizures or related problems, and approximately 50% have poor functional outcomes [1,2,50]. When the diagnosis of PNES is based on reliable criteria such as video EEG monitoring, misdiagnosis is unlikely; instead, the usual cause of a poor outcome is a patient’s chronic psychological and social problems [1,8,22,50].

It is noteworthy that children with PNES appear to have a much better prognosis than adults [9,10]. In fact, the etiology in children may be related more to transient stress and coping disorders, while adults are more likely to have PNES within the context of more chronic psychological maladjustment, such as personality disorders [10]. Another factor that accounts for the better outcomes in children is that they are usually properly diagnosed earlier in the course of their disorder [9,10].

Patients with milder psychopathology respond better to supportive educational or behavioral therapeutic approaches. In contrast, patients with more severe psychopathology and factitious disorders more often have associated chronic personality problems and, correspondingly, a poorer prognosis [1,50]. It also appears that patients who continue to be followed by the diagnosing neurologist or center do better than those who are not seen after diagnosis [49,67]. As knowledge about the nature of PNES and the associated psychopathology grows, better treatment strategies can be developed to improve the care and prognosis of these difficult and challenging patients.

In a large study of 164 patients followed for up to 10 years, outcomes were considered poor in general, but favorable prognostic factors included higher education, younger age at onset and at diagnosis, and less “dramatic” attacks, defined by a lack of positive motor features, ictal incontinence, or tongue biting. These findings were consistent with prior studies [52,68].

In addition, the patients who tended to have fewer seizures and do better long term had fewer somatoform and dissociative symptoms on psychometric testing [51]. These findings are often explained by the theory that patients who do poorly have inadequate coping strategies for stress and anxiety and, in a sense, suffer from emotional dysregulation.

Special Issues

Coexisting Epileptic and Psychogenic Nonepileptic Seizures

A complicating factor in diagnosis is that both PNES and epileptic seizures may occur in a single patient. Indeed, approximately 10% to 40% of patients identified as having PNES have also been reported to have epileptic seizures [1,23,33,56]. There are several possible explanations for this. Some patients with epilepsy may learn that seizures result in attention and fulfill certain psychological needs. Alternatively, they may have concomitant neurologic problems, personality disorders, cognitive deficits, or impaired coping mechanisms that predispose them to psychogenic symptoms [69–71]. Fortunately, in such patients with combined seizure disorders, the epileptic seizures are usually well controlled or of only historical relevance by the time a patient develops PNES [1,22,23,33,72–74].

In other patients, epileptic seizures and PNES may begin simultaneously, making management even more complex. In such patients, we have found it particularly helpful to focus on the semiology of the seizure manifestations as recorded by video EEG monitoring to distinguish PNES from epileptic seizures; we then direct treatment according to the semiology manifesting at the time. We have also found it useful, with patient consent, to show videos of the seizures to family members or caregivers to help them understand how best to respond to a patient’s symptoms when epileptic seizures and PNES coexist.

Misdiagnosis of Psychogenic Nonepileptic Seizures

Sometimes events that are initially diagnosed as nonepileptic actually prove to be epileptic. Such events have been called “pseudo-pseudo” or “epileptic-nonepileptic” seizures [1]. Frontal lobe seizures in particular may not be associated with significant ictal EEG changes and may therefore be misdiagnosed as PNES [23,75,76]. The clinical presentation and proper diagnosis of these types of events warrant emphasis.

Notable manifestations of frontal lobe seizures that may easily be confused with hysterical behavior include shouting, laughing, cursing, clapping, snapping, genital manipulation, pelvic thrusting, pedaling, running, kicking, and thrashing [23,75–77]. Not all of these behaviors are specific for frontal lobe seizures. For example, bicycling leg movements have also been reported in seizures originating from the temporal lobe [78].

Summary

PNES represent a common yet challenging problem in neurology, owing both to the difficulty of diagnosis and to the lack of effective, widely available treatment options. Overall outcomes of patients with PNES vary and may relate to an individual patient’s chronic psychological and social problems. Nevertheless, an accurate and timely diagnosis remains critical and can help provide direction for implementing appropriate treatment.

 

Corresponding author: Jennifer Hopp, MD, Department of Neurology, University of Maryland Medical Center, Room S12C09, 22 South Greene Street, Baltimore, MD 21201, [email protected].

Financial disclosures: None.

References

1. Krumholz A. Nonepileptic seizures: diagnosis and management. Neurology 1999;S76–83.

2. Meierkord H, Will B, Fish D, Shorvon S. The clinical features and prognosis of pseudoseizures diagnosed using video-EEG telemetry. Neurology 1991;41:1643–6.

3. Lesser RP. Psychogenic seizures. Neurology 1996;46:1499–1507.

4. Sigurdardottir KR, Olafsson E. Incidence of psychogenic seizures in adults: a population-based study in Iceland. Epilepsia 1998;39:857–62.

5. Szaflarski JP, Szaflarski M, Hughes C, et al. Psychopathology and quality of life: psychogenic non-epileptic seizures versus epilepsy. Med Sci Monit 2003;9:CR113–8.

6. Barry E, Krumholz A, Bergey C, et al. Nonepileptic posttraumatic seizures. Epilepsia 1998;39:427–31.

7. Pakalnis A, Drake ME, Phillips B. Neuropsychiatric aspects of psychogenic status epilepticus. Neurology 1991;41:1104–6.

8. Walzack TS, Papacostas S, Williams DT, et al. Outcome after the diagnosis of psychogenic nonepileptic seizures. Epilepsia 1995;36:1131–7.

9. Metrick ME, Ritter FJ, Gates JR, et al. Nonepileptic events in childhood. Epilepsia 1991;32:322–8.

10. Wyllie E, Friedman D, Luders H, et al. Outcome of psychogenic seizures in children and adolescents compared to adults. Neurology 1991;41:742–4.

11. Duncan R, Oto M. Predictors of antecedent factors in psychogenic nonepileptic attacks: multivariate analysis. Neurology 2008;71:1000–5.

12. Alper K, Devinsky O, Perrine K, et al. Nonepileptic seizures and childhood sexual and physical abuse. Neurology 1993; 43:1950–3.

13. LaFrance WC Jr, Devinsky O. The treatment of nonepileptic seizures: historical perspectives and future directions. Epilepsia 2004;45 Suppl 2:15–21.

14. Westbrook LE, Devinsky O, Geocadin R. Nonepileptic seizures after head injury. Epilepsia 1998;39:978–82.

15. Slavney PR. Perspectives on hysteria. Baltimore: Johns Hopkins University Press; 1990.

16. Veith I. Hysteria: the history of a disease. Chicago: University of Chicago Press; 1965.

17. Goetz CG. Charcot the clinician. The Tuesday lessons. New York: Raven Press; 1987.

18. Massey EW, McHenry LC. Hysteroepilepsy in the nineteenth century: Charcot and Gowers. Neurology 1986;36:65–7.

19. Zeigler FJ, Imboden JB, Meyer E. Contemporary conversion reactions: a clinical study. Am J Psychiatry 1960;116:901–10.

20. Liske E, Forster FM. Pseudoseizures: a problem in the diagnosis and management of epileptic patients. Neurology 1964;14:41–9.

21. Diagnostic and statistical manual of mental disorders. DSM-IV 4th ed. American Psychiatric Association. Washington, DC; 1995.

22. Krumholz A, Niedermeyer E. Psychogenic seizures: a clinical study with follow-up data. Neurology 1983;33:498–502.

23. Krumholz A, Ting T. Co-existing epileptic and nonepileptic seizures. In: Kaplan PW, Fisher RS, editors. Imitators of epilepsy. 2nd ed. New York: Demos Medical Publishing; 2005:261–76.

24. Gates JR, Ramani V, Whalen S, Loewenson R. Ictal characteristics of pseudoseizures. Arch Neurol 1985;42:1183–87.

25. Leis AA, Ross MA, Summers AK. Psychogenic seizures: Ictal characteristics and diagnostic pitfalls. Neurology 1992;42:95–9.

26. Walczak TS, Bogolioubov. Weeping during psychogenic nonepileptic seizures. Epilepsia 1996;37:207–10.

27. Bergen D, Ristanovic R. Weeping is a common element during psychogenic nonepileptic seizures. Arch Neurol 1993;50:1059–60.

28. Peguero E, Abou-Khalil B, Fakhoury, Mathews G. Self-injury and incontinence in psychogenic seizures. Epilepsia 1995;36:586–91.

29. Orbach D, Ritaccio A, Devinsky O. Psychogenic, nonepileptic seizures associated with video-EEG-verified sleep. Epilepsia 2003;44:64–8.

30. Walczak TS, Williams DT, Berton W. Utility and reliability of placebo infusion in the evaluation of patients with seizures. Neurology 1994;44:394–99.

31. Bazil CW, Kothari M, Luciano D, et al. Provocation of nonepileptic seizures by suggestion in a general seizure population. Epilepsia 1994;35:768–70.

32. Devinsky O, Fisher RS. Ethical use of placebos and provocative testing in diagnosing nonepileptic seizures. Neurology 1996;47:866–70.

33. Lesser RP, Lueders H, Dinner DS. Evidence for epilepsy is rare in patients with psychogenic seizures. Neurology 1983; 33:502–4.

34. Barre MA, Burnstine TH, Fisher RS, Lesser RP. Electroencephalographic changes during simple partial seizures. Epilepsia 1994;35:715–20.

35. Trimble MR. Serum prolactin levels in epilepsy and hysteria. BMJ 1978;2:1682.

36. Laxer KD, Mullooly JP, Howell B. Prolactin changes after seizures classified by EEG monitoring. Neurology 1985; 35:31–5.

37. Pritchard PB, Wannamaker BB, Sagel J, et al. Endocrine function following complex partial seizures. Ann Neurol 1983;14:27–32.

38. Malkowicz DE, Legido A, Jackel RA, et al. Prolactin secretion following repetitive seizures. Neurology 1995;45:448–52.

39. Oribe E, Rohullah A, Nissenbaum E, Boal B. Serum prolactin concentrations are elevated after syncope. Neurology 1996;47:60–2.

40. Henrichs TF, Tucker DM, Farha J, Novelly RA. MMPI indices in the identification of patients evidencing pseudoseizures. Epilepsia 1988;29:184–8.

41. Wilkus RJ, Dodrill CB. Factors affecting the outcome of MMPI and neuropsychological assessments of psychogenic and epileptic seizure patients. Epilepsia 1989;30:339–47.

42. DeTimary P, Fouchet P, Sylin M, et al. Non–epileptic seizures: delayed diagnosis in patients presenting with electroencephalographic (EEG) or clinical signs of epileptic seizures. Seizure 2002;11:193–7.

43. Reuber M, Fernandez G, et al. Diagnostic delay in psychogenic nonepileptic seizures. Neurology 2002;58:493–5.

44. Rosenbaum DH, et al. Outpatient multidisciplinary management of non-epileptic seizures. In: Rowan AJ, Gates JR, editors. Non-epileptic seizures. 1st ed. Stoneham, MA: Butterworth-Heinemann; 1993:275–83.

45. Lempert T, Schmidt D. Natural history and outcome of psychogenic seizures: a clinical study in 50 patients. J Neurol 1990;237:35–8.

46. Selwa LM, Geyer J, Nikakhtar N, et al. Nonepileptic seizure outcome varies by type of spell and duration of illness. Epilepsia 2000;41:1330–4.

47. Buchanan N, Snars J. Pseudoseizures (non epileptic attack disorder): clinical management and outcome in 50 patients. Seizure 1993;2:141–6.

48. Kanner AM. More controversies on the treatment of psychogenic pseudoseizures: an addendum. Epilepsy Behav 2003;4:360–4.

49. Aboukasm A, Mahr G, Gahry BR, et al. Retrospective analysis of the effects of psychotherapeutic interventions on outcomes of psychogenic nonepileptic seizures. Epilepsia 1998;39:470–3.

50. Reuber M, Pukrop T, Bauer J, et al. Outcome in psychogenic nonepileptic seizures: 1 to 10-year follow-up in 164 patients. Ann Neurol 2003;53:305–11.

51. McKenzie P, Oto M, Russell A, Pelosi A, Duncan R. Early outcomes and predictors in 260 patients with psychogenic nonepileptic seizures (PNES). Neurology 2010;74:64–9.

52. Kanner AM, Parra J, Frey M, et al. Psychiatric and neurologic predictors of psychogenic pseudoseizure outcome. Neurology 1999;53:933–8.

53. Shen W, Bowman ES, Markand ON. Presenting the diagnosis of pseudoseizure. Neurology 1990; 40:756–9.

54. Friedman JH, LaFrance Jr WC. Psychogenic disorders: the need to speak plainly. Arch Neurol 2010;67:753–5.

55. LaFrance Jr WC. Psychogenic nonepileptic “seizures” or “attacks”? It’s not just semantics: “Seizures.” Neurology 2010;75: 87–8.

56. Ramsay RE, Cohen A, Brown MC. Coexisting epilepsy and non-epileptic seizures. In: Non-epileptic seizures. Butterworth-Heinemann; 1998:47–54.

57. Stone J, Carson A, Sharpe M. Functional symptoms in neurology: management. J Neurol Neurosurg Psychiatry 2005;76(Suppl 1):i13–i21.

58. LaFrance WC Jr, Bjornaes H. Designing treatment plans based on etiology of psychogenic nonepileptic seizures. In: Schachter SC, LaFrance WC Jr, editors. Gates and Rowan’s nonepileptic seizures. 3rd ed. New York: Cambridge University Press; 2010:266–80.

59. Kroenke K, Swindle R. Cognitive-behavioral therapy for somatization and symptom syndromes: a critical review of controlled clinical trials. Psychother Psychosom 2000;69:205–15.

60. Kroenke K. Efficacy of treatment of somatoform disorders: a review of randomized controlled trials. Psychosom Med 2007;69:881–8.

61. LaFrance WC Jr, Miller IW, Ryan CE, et al. Cognitive behavioral therapy for psychogenic nonepileptic seizures. Epilepsy Behav 2009;14:591–6.

62. Chalder T. Non-epileptic attacks: a cognitive behavioral approach in a single case with a four-year follow-up. Clin Psychol Psychother 1996;3:291–7.

63. Betts T, Duffy N. Non-epileptic attack disorder (pseudoseizures) and sexual abuse: a review. In: Gram L, Johannessen SI, Osterman PE, et al, editors. Pseudo-epileptic seizures. Petersfield, UK: Wrightson Biomedical Publishing; 1993:55–66.

64. Lesser RP. Treatment and outcome of psychogenic nonepileptic seizures. Epilepsy Currents 2003;3:198–200.

65. Ramani V. Review of psychiatric treatment strategies in non-epileptic seizures. In: Rowan AJ, Gates JR, eds. Non-epileptic Seizures. 1st ed. Stoneham, MA: Butterworth Heinemann; 1993:259–67.

66. Goldstein LH, Chalder T, Chigwedere C, et al. Cognitive-behavioral therapy for psychogenic nonepileptic seizures: a pilot RCT. Neurology 2010;74:1986–94.

67. Bennet C, So NM, Smith WB, Thompson K. Structured treatment improves the outcome of nonepileptic events. Epilepsia 1997;38(Suppl 8):214.

68. McDade G, Brown SW. Non-epileptic seizures: management and predictive factors of outcome. Seizure 1992;1:7–10.

69. Bowman ES. Etiology and clinical course of pseudoseizures: relationship to trauma, depression, and dissociation. Psychosomatics 1993;34:333–42.

70. Bowman ES, Markand ON. Psychodynamics and psychiatric diagnoses of pseudoseizure subjects. Am J Psychiatry 1996;153:57–63.

71. Vanderzant CW, Giordani B, Berent S, et al. Personality of patients with pseudoseizures. Neurology 1986;36:664–8.

72. Benbadis SR, Agrawal V, Tatum WO. How many patients with psychogenic nonepileptic seizures also have epilepsy? Neurology 2001; 57:915–7.

73. Glosser G, Roberts D, et al. Nonepileptic seizures after resective epilepsy surgery. Epilepsia 1999; 40:1750–4.

74. Reuber M, Kral T. New-onset psychogenic seizures after intracranial neurosurgery. Acta Neurochir (Wien) 2002; 144:901–7.

75. Williamson P, Spencer D, Spencer S, et al. Complex partial seizures of frontal lobe origin. Ann Neurol 1985;18:497–504.

76. Saygi S, Katz A, Marks D, et al. Frontal lobe partial seizures and psychogenic seizures: comparison of clinical and ictal characteristics. Neurology 1992;42:1274–7.

77. Waterman K, Purves S, Kosaka B, et al. An epileptic syndrome caused by mesial frontal lobe seizure foci. Neurology 1987; 37:577–82.

78. Sussman N, Jackel R, Kaplan L, et al. Bicycling movements as a manifestation of complex partial seizures of temporal lobe origin. Epilepsia 1989;30:527–31.


Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Display Headline
Psychogenic Nonepileptic Seizures

How Valid Is the “Healthy Obese” Phenotype For Older Women?

Article Type
Changed
Tue, 03/06/2018 - 15:57
Display Headline
How Valid Is the “Healthy Obese” Phenotype For Older Women?

Study Overview

Objective. To determine whether having a body mass index (BMI) in the obese range (≥ 30 kg/m2) as an older adult woman is associated with changes in late-age survival and morbidity.

Design. Observational cohort study.

Setting and participants. This study relied upon data collected as part of the Women’s Health Initiative (WHI), an observational study and clinical trial focusing on the health of postmenopausal women aged 50–79 years at enrollment. For the purposes of the WHI, women were recruited from centers across the United States between 1993 and 1998 and could participate in several intervention studies (hormone replacement therapy, low-fat diet, calcium/vitamin D supplementation) or an observational study [1].

For this paper, the authors utilized data from those WHI participants who, based on their age at enrollment, could have reached age 85 years by September of 2012. The authors excluded women who did not provide follow-up health information within 18 months of their 85th birthdays or who reported mobility disabilities at their baseline data collection. This resulted in a total of 36,611 women for analysis.

There were a number of baseline measures collected on the study participants. Via written survey, participants self-reported their race and ethnicity, hormone use status, smoking status, alcohol consumption, physical activity level, depressive symptoms, and a number of demographic characteristics. Study personnel objectively measured height and weight to calculate baseline BMI and also measured waist circumference (WC, in cm).

The primary exposure measure for this study was BMI category at trial entry categorized as follows: underweight (< 18.5 kg/m2), healthy weight (18.5–24.9 kg/m2), overweight (25.0–29.9 kg/m2) or obese class I (30–34.9 kg/m2), II (35–39.9 kg/m2) or III (≥ 40 kg/m2), using standard accepted cut-points except for Asian/Pacific Islander participants, where alternative World Health Organization (WHO) cut-points were used. The WHO cut-points are slightly lower to account for usual body habitus and disease risk in that population. BMI changes over study follow-up were not included in the exposure measure for this study. WC (dichotomized around 88 cm) was also used as an exposure measure.
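For concreteness, the baseline BMI categorization described above can be sketched as a small function. This is an illustrative sketch using the standard cut-points only; the slightly lower WHO cut-points the study applied to Asian/Pacific Islander participants are not reproduced here.

```python
def bmi_category(bmi_kg_m2: float) -> str:
    """Classify a BMI value (kg/m2) using the standard cut-points above.

    Illustrative only: the study used alternative, lower WHO cut-points
    for Asian/Pacific Islander participants, which are omitted here.
    """
    if bmi_kg_m2 < 18.5:
        return "underweight"
    if bmi_kg_m2 < 25.0:
        return "healthy weight"
    if bmi_kg_m2 < 30.0:
        return "overweight"
    if bmi_kg_m2 < 35.0:
        return "obese class I"
    if bmi_kg_m2 < 40.0:
        return "obese class II"
    return "obese class III"
```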

Main outcome measures. Disease-free survival status during the follow-up period. In the year at which participants were supposed to reach their 85th birthdays, they were categorized as to whether they had survived or not. Survival status was ascertained by hospital record review, autopsy reports, death certificates and review of the National Death Index. Those who survived were sub-grouped according to type of survival into 1 of the following categories: (1) no incident disease and no mobility disability (healthy), (2) baseline disease present but no incident disease or mobility disability during follow-up (prevalent disease), (3) incident disease but no mobility disability during follow-up (incident disease), and (4) incident mobility disability with or without incident disease (disabled).

Diseases of interest (prevalent and incident) included coronary and cerebrovascular disease, cancer, diabetes and hip fracture—the conditions the investigators felt most increased risk of death or morbidity and mobility disability in this population of aging women. Baseline disease status was defined using self-report, but incident disease in follow-up was more rigorously defined using self-report plus medical record review, except for incident diabetes, which required only self-report of diagnosis plus report of new oral hypoglycemic or insulin use.

Because the outcome of interest (survival status) had 5 possible categories, multinomial logistic regression was used as the analytic technique, with baseline BMI category and WC categories as predictors. The authors adjusted for baseline characteristics including age, race/ethnicity, study arm (intervention or observational for WHI), educational level, marital status, smoking status, ethanol use, self-reported physical activity and depression symptoms. Because of the possibly interrelated predictors (BMI and WC), the authors built BMI models with and without WC, and when WC was the primary predictor they adjusted for a participant’s BMI in order to try to isolate the impact of central adiposity. Additionally, they performed the analyses stratified by race and ethnicity as well as by smoking status.
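As a rough sketch of the analytic setup, a multinomial logistic regression with a five-level outcome can be fit as follows. The data, variable coding, and predictors below are entirely hypothetical placeholders, not the authors' actual dataset or covariate set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical coding (assumption, not the study's actual data):
# x1 = baseline BMI category (0 = underweight ... 5 = obese class III)
# x2 = waist circumference >= 88 cm (0/1)
X = np.column_stack([rng.integers(0, 6, n), rng.integers(0, 2, n)])

# Hypothetical 5-level outcome: 0 = died, 1 = disabled,
# 2 = incident disease, 3 = prevalent disease, 4 = healthy
y = rng.integers(0, 5, n)

# The default lbfgs solver fits a multinomial model for multiclass outcomes
model = LogisticRegression(max_iter=1000).fit(X, y)

# One coefficient vector per outcome category; exponentiating puts the
# coefficients on an odds-ratio-like scale (relative to the softmax
# baseline, not to a single reference category as in the paper's ORs)
coef_or_scale = np.exp(model.coef_)
print(coef_or_scale.shape)  # one row per outcome level, one column per predictor
```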

Results. The mean (SD) baseline age of participants was 72.4 (3) years, and the vast majority (88.5%) self-identified as non-Hispanic white. At the end of the follow-up period, of the initial 36,611 participants, 9079 (24.8%) had died, 6702 (18.3%) had become disabled, 8512 (23.2%) had developed incident disease without disability, 5366 (14.6%) had prevalent but no incident disease, and 6952 (18.9%) were categorized as healthy. There were a number of potentially confounding baseline characteristics that differed between the survival categories. Importantly, race was associated with survival status—non-Hispanic white women were more likely to be in the “healthy” category at follow-up than their counterparts from other races/ethnicities. Baseline smokers, as well as those with less than a high school education, were less likely to live to 85 years.

In models adjusting for baseline covariates, with BMI category as the primary predictor, women with an obese baseline BMI had significantly increased odds of not living to 85 years of age, relative to women in a healthy baseline BMI category, with increasing odds of death among those with higher baseline BMI levels (class I obesity odds ratio [OR] 1.72 [95% CI 1.55–1.92], class II obesity OR 3.28 [95% CI 2.69–4.01], class III obesity OR 3.48 [95% CI 2.52–4.80]). Amongst survivors, baseline obesity was also associated with greater odds of developing incident disease, relative to healthy weight women (class I obesity OR 1.65 [95% CI 1.48–1.84], class II obesity OR 2.44 [95% CI 2.02–2.96], class III obesity OR 1.73 [95% CI 1.21–2.46]). There was a striking relationship between baseline obesity and the odds of incident disability during follow-up (class I obesity OR 3.22 [95% CI 2.87–3.61], class II obesity OR 6.62 [95% CI 5.41–8.09], class III obesity OR 6.65 [95% CI 4.80–9.21]).

Women who were overweight at baseline also displayed statistically significant but more modestly increased odds of incident disease, mobility disability, and death relative to their normal-weight counterparts. Importantly, even in multivariable models, being underweight at baseline was also associated with significantly increased odds of death before age 85 relative to healthy weight individuals (OR 2.09 [95% CI 1.54–2.85]) but not with increased odds of incident disease or disability.
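The reported odds ratios come from the adjusted multinomial models, but the underlying arithmetic of an odds ratio and its Wald-type 95% confidence interval can be illustrated with a crude, hypothetical 2×2 table (unadjusted; the counts below are invented for illustration):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table:
         a = exposed with outcome,   b = exposed without outcome
         c = unexposed with outcome, d = unexposed without outcome
    Illustrative only; the study's ORs came from adjusted multinomial
    models, not raw 2x2 tables.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 50/100 exposed vs 25/100 unexposed had the outcome
or_, lo, hi = odds_ratio_ci(50, 50, 25, 75)
print(f"OR {or_:.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```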

When WC status was adjusted for in the “BMI-outcome” models, the odds of death, disability, and incident disease were attenuated for obese women but remained elevated, particularly for women with class II or III obesity. When WC was examined as a primary predictor in multivariable models (adjusted for BMI category), those women with baseline WC ≥ 88 cm experienced increased odds of incident disease (OR 1.47 [95% CI 1.33–1.62]), mobility disability (OR 1.64 [95% CI 1.49–1.84]) and death (OR 1.83 [95% CI 1.66–2.03]) compared to women with smaller baseline WC.

When participants were stratified by race/ethnicity, the relationships for increasing odds of incident disease/disability with baseline obesity persisted for non-Hispanic white and black/African-American participants. Hispanic/Latina participants who were obese at baseline, however, did not have significantly increased odds of death before 85 years relative to healthy weight counterparts, although there were far fewer of these women represented in the cohort (n = 600). Asian/Pacific Islander (API) participants (n = 781), the majority of whom were in the healthy weight range at baseline (57%), showed a somewhat different pattern. Odds ratios for incident disease and death among obese API women were not significantly elevated relative to healthy weight women (although the numbers in these groups were relatively small); however, the odds of incident disability were significantly elevated among API women who were obese at baseline (OR 4.95 [95% CI 1.51–16.23]).

Conclusion. Compared to older women with a healthy BMI, obese women and those with increased abdominal circumference had a lower chance of surviving to age 85 years. Those who did survive were more likely to develop incident disease and/or disability than their healthy weight counterparts.

Commentary

The prevalence of obesity has risen substantially over the past several decades, and few demographic groups have found themselves spared from the epidemic [2]. Although much focus is placed on obesity incidence and prevalence among children and young adults, adults over age 60, a growing segment of the US population, are heavily impacted by the rising rates of obesity as well, with 42% of women and 37% of men in this group characterized as obese in 2010 [2]. This trend has potentially major implications for policy makers who are tasked with cutting the cost of programs such as Medicare.

Obesity has only recently been recognized as a disease by the American Medical Association, and yet it has long been associated with costly and debilitating chronic conditions such as type 2 diabetes, hypertension, sleep apnea, and degenerative joint disease [3]. Despite this fact, several epidemiologic studies have suggested an “obesity paradox”—older adults who are mildly obese have mortality rates similar to normal weight adults, and those who are overweight appear to have lower mortality [4]. These papers have generated controversy among obesity researchers and epidemiologists who have grappled with the following question: How is it possible that overweight and obesity, while clearly linked to so many chronic conditions that increase mortality and morbidity, might be a good thing? Is there such a thing as a “healthy level of obesity,” or can you be “fit and fat”? In the midst of these discussions and the media storm that inevitably surrounds them, patients are confronted with confusing mixed messages, possibly making them less likely to attempt to maintain a healthy body weight. Unfortunately, as many prior authors have asserted, most of the epidemiologic studies claiming this protective effect of overweight and obesity have not accounted for potentially important confounders of the “weight category–mortality” relationship, such as smoking status [5]. Among older adults, a substantial fraction of those in the normal weight category are at a so-called healthy BMI for very unhealthy reasons, such as cigarette smoking, cancer, or other chronic conditions (ie, they were heavier but lost weight due to underlying illness). Including these sick (but so-called “healthy weight”) people alongside those who are truly healthy and in a healthy BMI range muddies the picture and does not effectively isolate the impact of weight status on morbidity and mortality.

This cohort study by Rillamas-Sun et al makes an important contribution to the discussion by relying on a very large and comprehensive dataset, with an impressive follow-up period of nearly 2 decades, to more fully isolate the relationship between BMI category and survival for postmenopausal women. By adjusting for important potential confounders such as baseline smoking status, alcohol use, chronic disease status and a number of sociodemographic factors, and by separating out the chronically ill patients from the beginning, the investigators reached conclusions that seem to align better with all that we know about the increased health risks conferred by obesity. They found that postmenopausal women who were obese but without prevalent disease at baseline had increased odds of death before age 85, as well as increased odds of incident chronic disease (such as cardiovascular disease or diabetes) and increased odds of incident disability relative to postmenopausal women starting out in a healthy BMI range. Degree of obesity seemed to matter as well; those with class II and III obesity had significantly increased odds of developing mobility impairment, in particular, relative to normal weight women. This is particularly important when viewed through the lens of caring for an aging population—those who have significant mobility impairment will have a much harder time caring for themselves as they age. Furthermore, they found that overweight women also faced slightly increased odds of these outcomes relative to normal weight women. Abdominal adiposity, in particular, appeared to confer risk of death and disease, as elevated odds of mortality and incident disease or disability persisted in women with waist circumference ≥ 88 cm even after adjusting for BMI. 
As prior research on this topic has suggested, this study also supported the finding that being underweight increases one's odds of death; however, there was no increased incidence of disease or mobility disability for underweight women (relative to those at a healthy starting weight).

The authors of the study made a wise decision in separating women with baseline chronic illness from those who had not yet been diagnosed with diabetes, cardiovascular disease or other chronic condition at baseline. As is pointed out in an editorial accompanying this study [6], this creates a scenario where the exposure (obesity) clearly predates the outcome (chronic illness), helping to avoid contamination of risk estimates by reverse causation (ie, is chronic illness leading to increased obesity, with the downstream increase in mortality actually due to the chronic illness?).

Despite the clear strengths of the study, there are several important limitations that must be acknowledged in interpreting the results. The most obvious is that BMI status was only measured at baseline. There is no way of knowing either what a participant’s weight trajectory had been in their younger years, or what happened to their BMI during the study follow-up period, both of which could certainly impact a participant’s risk of morbidity or mortality. Given a follow-up period of nearly 20 years, it is possible that there was crossover between BMI (exposure) categories after baseline assignment. Furthermore, the study does not address the very important question of how an intervention to promote weight loss in older women might impact morbidity and mortality—it is possible that encouraging weight loss in this population may in fact worsen health outcomes for some patients [6].

The generalizability of the study may be somewhat limited. The study population represented a group of women who were likely relatively healthy and motivated, having self-selected to participate in the WHI; thus, they could have been healthier than groups studied in previous population-based samples. Furthermore, the study results may not generalize to men; however, other similar cohort studies with male participants have reached similar conclusions [7].

Applications for Clinical Practice

To promote longevity and maintenance of independence in our growing population of postmenopausal women, it is important that physicians continue to educate and assist their patients in maintaining a healthy weight as they age. Although the impact of intentional weight loss in obese older women is not addressed by this paper, it does support the idea that obese postmenopausal women are at higher risk of death before age 85 years and disability. Therefore, for these patients, physicians should take particular care to reinforce healthy lifestyle choices such as good nutrition and regular physical activity.

—Kristina Lewis, MD, MPH

References

1. Design of the Women’s Health Initiative clinical trial and observational study. The Women’s Health Initiative Study Group. Control Clin Trials 1998;19:61–109.

2. Flegal KM, Carroll MD, Kit BK, Ogden CL. Prevalence of obesity and trends in the distribution of body mass index among US adults, 1999-2010. JAMA 2012;307:491–7.

3. Must A, Spadano J, Coakley EH, et al. The disease burden associated with overweight and obesity. JAMA 1999;282:1523–9.

4. Flegal KM, Kit BK, Orpana H, Graubard BI. Association of all-cause mortality with overweight and obesity using standard body mass index categories: a systematic review and meta-analysis. JAMA 2013;309:71–82.

5. Jackson CL, Stampfer MJ. Maintaining a healthy body weight is paramount. JAMA Intern Med 2014;174:23–4.

6. Dixon JB, Egger GJ, Finkelstein EA, et al. ‘Obesity Paradox’ misunderstands the biology of optimal weight throughout the life cycle. Int J Obesity 2014.

7. Reed DM, Foley DJ, White LR, et al. Predictors of healthy aging in men with high life expectancies. Am J Public Health 1998;88:1463–8.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Publications
Sections

Study Overview

Objective. To determine whether having a body mass index (BMI) in the obese range (30 kg/m2) as an older adult woman is associated with changes in late-age survival and morbidity.

Design. Observational cohort study.

Setting and participants. This study relied upon data collected as part of the Women’s Health Initiative (WHI), an observational study and clinical trial focusing on the health of postmenopausal women aged 50–79 years at enrollment. For the purposes of the WHI, women were recruited from centers across the United States between 1993 and 1998 and could participate in several intervention studies (hormone replacement therapy, low-fat diet, calcium/vitamin D supplementation) or an observational study [1].

For this paper, the authors utilized data from those WHI participants who, based on their age at enrollment, could have reached age 85 years by September of 2012. The authors excluded women who did not provide follow-up health information within 18 months of their 85th birthdays or who reported mobility disabilities at their baseline data collection. This resulted in a total of 36,611 women for analysis.

There were a number of baseline measures collected on the study participants. Via written survey, participants self-reported their race and ethnicity, hormone use status, smoking status, alcohol consumption, physical activity level, depressive symptoms, and a number of demographic characteristics. Study personnel objectively measured height and weight to calculate baseline BMI and also measured waist circumference (WC, in cm).

The primary exposure measure for this study was BMI category at trial entry categorized as follows: underweight (< 18.5 kg/m2), healthy weight (18.5–24.9 kg/m2), overweight (25.0–29.9 kg/m2) or obese class I (30–34.9 kg/m2), II (35–39.9 kg/m2) or III (≥ 40 kg/m2), using standard accepted cut-points except for Asian/Pacific Islander participants, where alternative World Health Organization (WHO) cut-points were used. The WHO cut-points are slightly lower to account for usual body habitus and disease risk in that population. BMI changes over study follow-up were not included in the exposure measure for this study. WC (dichotomized around 88 cm) was also used as an exposure measure.

Main outcome measures. Disease-free survival status during the follow-up period. In the year at which participants were supposed to reach their 85th birthdays, they were categorized as to whether they had survived or not. Survival status was ascertained by hospital record review, autopsy reports, death certificates and review of the National Death Index. Those who survived were sub-grouped according to type of survival into 1 of the following categories: (1) no incident disease and no mobility disability (healthy), (2) baseline disease present but no incident disease or mobility disability during follow-up (prevalent disease), (3) incident disease but no mobility disability during follow-up (incident disease), and (4) incident mobility disability with or without incident disease (disabled).

Diseases of interest (prevalent and incident) included coronary and cerebrovascular disease, cancer, diabetes and hip fracture—the conditions the investigators felt most increased risk of death or morbidity and mobility disability in this population of aging women. Baseline disease status was defined using self-report, but incident disease in follow-up was more rigorously defined using self-report plus medical record review, except for incident diabetes, which required only self-report of diagnosis plus report of new oral hypoglycemic or insulin use.

Because the outcome of interest (survival status) had 5 possible categories, multinomial logistic regression was used as the analytic technique, with baseline BMI category and WC categories as predictors. The authors adjusted for baseline characteristics including age, race/ethnicity, study arm (intervention or observational for WHI), educational level, marital status, smoking status, ethanol use, self-reported physical activity and depression symptoms. Because of the possibly interrelated predictors (BMI and WC), the authors built BMI models with and without WC, and when WC was the primary predictor they adjusted for a participant’s BMI in order to try to isolate the impact of central adiposity. Additionally, they performed the analyses stratified by race and ethnicity as well as by smoking status.

Results. The mean (SD) baseline age of participants was 72.4 (3) years, and the vast majority (88.5%) self-identified as non-Hispanic white. At the end of the follow-up period, of the initial 36,611 participants, 9079 (24.8%) had died, 6702 (18.3%) had become disabled, 8512 (23.2%) had developed incident disease without disability, 5366 (14.6%) had prevalent but no incident disease, and 6952 (18.9%) were categorized as healthy. There were a number of potentially confounding baseline characteristics that differed between the survival categories. Importantly, race was associated with survival status—non-Hispanic white women were more likely to be in the “healthy” category at follow-up than their counterparts from other races/ethnicities. Baseline smokers were more likely not to live to 85 years, and those with less than a high school education were also more likely not to live to 85 years.

In models adjusting for baseline covariates, with BMI category as the primary predictor, women with an obese baseline BMI had significantly increased odds of not living to 85 years of age, relative to women in a healthy baseline BMI category, with increasing odds of death among those with higher baseline BMI levels (class I obesity odds ratio [OR] 1.72 [95% CI 1.55–1.92], class II obesity OR 3.28 [95% CI 2.69–4.01], class III obesity OR 3.48 [95% CI 2.52–4.80]). Amongst survivors, baseline obesity was also associated with greater odds of developing incident disease, relative to healthy weight women (class I obesity OR 1.65 [95% CI 1.48–1.84], class II obesity OR 2.44 (95% CI 2.02–2.96), class III obesity OR 1.73 [95% CI 1.21–2.46]). There was a striking relationship between baseline obesity and the odds of incident disability during follow-up (class I obesity OR 3.22 [95% CI 2.87–3.61], class II obesity OR 6.62 [95% CI 5.41–8.09], class III obesity OR 6.65 [95% CI 4.80–9.21]).

Women who were overweight at baseline also displayed statistically significant but more modestly increased odds of incident disease, mobility disability, and death relative to their normal-weight counterparts. Importantly, even in multivariable models, being underweight at baseline was also associated with significantly increased odds of death before age 85 relative to healthy weight individuals (OR 2.09 [95% CI 1.54–2.85]) but not with increased odds of incident disease or disability.

When WC status was adjusted for in the “BMI-outcome” models, the odds of death, disability, and incident disease were attenuated for obese women but remained elevated, particularly for women with class II or III obesity. When WC was examined as a primary predictor in multivariable models (adjusted for BMI category), those women with baseline WC ≥ 88 cm experienced increased odds of incident disease (OR 1.47 [95% CI 1.33–1.62]), mobility disability (OR 1.64 [95% CI 1.49–1.84]) and death (OR 1.83 [95% CI 1.66–2.03]) compared to women with smaller baseline WC.

When participants were stratified by race/ethnicity, the relationships for increasing odds of incident disease/disability with baseline obesity persisted for non-Hispanic white and black/African-American participants. Hispanic/Latina participants who were obese at baseline, however, did not have significantly increased odds of death before 85 years relative to healthy weight counterparts, although there were far fewer of these women represented in the cohort (n = 600). Asian/Pacific Islander (API) participants (n = 781), the majority of whom were in the healthy weight range at baseline (57%), showed a somewhat different pattern. Odds ratios for incident disease and death among obese API women were not significantly elevated relative to healthy weight women (although the “n ”s for these groups was relatively small), however the odds of incident disability was significantly elevated amongst API women who were obese at baseline (OR 4.95 [95% CI 1.51–16.23]).

Conclusion. Compared to older women with a healthy BMI, obese women and those with increased abdominal circumference had a lower chance of surviving to age 85 years. Those who did survive were more likely to develop incident disease and/or disability than their healthy weight counterparts.

Commentary

The prevalence of obesity has risen substantially over the past several decades, and few demographic groups have found themselves spared from the epidemic [2]. Although much focus is placed on obesity incidence and prevalence among children and young adults, adults over age 60, a growing segment of the US population, are heavily impacted by the rising rates of obesity as well, with 42% of women and 37% of men in this group characterized as obese in 2010 [2]. This trend has potentially major implications for policy makers who are tasked with cutting the cost of programs such as Medicare.

Obesity has only recently been recognized as a disease by the American Medical Association, and yet it has long been associated with costly and debilitating chronic conditions such as type 2 diabetes, hypertension, sleep apnea, and degenerative joint disease [3]. Despite this fact, several epidemiologic studies have suggested an “obesity paradox”—older adults who are mildly obese have mortality rates similar to normal weight adults, and those who are overweight appear to have lower mortality [4]. These papers have generated controversy among obesity researchers and epidemiologists who have grappled with the following question: How is it possible that overweight and obesity, while clearly linked to so many chronic conditions that increase mortality and morbidity, might be a good thing? Is there such a thing as a “healthy level of obesity,” or, can you be “fit and fat?” In the midst of these discussions and the media storm that inevitably surrounds them, patients are confronted with confusing mixed messages, possibly making them less likely to attempt to maintain a healthy body weight. Unfortunately, as many prior authors have asserted, most of the epidemiologic studies that assert this protective effect of overweight and obesity have not accounted for potentially important confounders of the “weight category–mortality” relationship, such as smoking status [5]. Among older adults, a substantial fraction of those in the normal weight category are at a so-called healthy BMI for very unhealthy reasons, such as cigarette smoking, cancer, or other chronic conditions (ie, they were heavier but lost weight due to underlying illness). Including these sick (but so-called “healthy weight”) people alongside those who are truly healthy and in a healthy BMI range muddies the picture and does not effectively isolate the impact of weight status on morbidity and mortality.


Study Overview

Objective. To determine whether having a body mass index (BMI) in the obese range (≥ 30 kg/m2) as an older adult woman is associated with changes in late-age survival and morbidity.

Design. Observational cohort study.

Setting and participants. This study relied upon data collected as part of the Women’s Health Initiative (WHI), an observational study and clinical trial focusing on the health of postmenopausal women aged 50–79 years at enrollment. For the purposes of the WHI, women were recruited from centers across the United States between 1993 and 1998 and could participate in several intervention studies (hormone replacement therapy, low-fat diet, calcium/vitamin D supplementation) or an observational study [1].

For this paper, the authors utilized data from those WHI participants who, based on their age at enrollment, could have reached age 85 years by September of 2012. The authors excluded women who did not provide follow-up health information within 18 months of their 85th birthdays or who reported mobility disabilities at their baseline data collection. This resulted in a total of 36,611 women for analysis.

There were a number of baseline measures collected on the study participants. Via written survey, participants self-reported their race and ethnicity, hormone use status, smoking status, alcohol consumption, physical activity level, depressive symptoms, and a number of demographic characteristics. Study personnel objectively measured height and weight to calculate baseline BMI and also measured waist circumference (WC, in cm).

The primary exposure measure for this study was BMI category at trial entry categorized as follows: underweight (< 18.5 kg/m2), healthy weight (18.5–24.9 kg/m2), overweight (25.0–29.9 kg/m2) or obese class I (30–34.9 kg/m2), II (35–39.9 kg/m2) or III (≥ 40 kg/m2), using standard accepted cut-points except for Asian/Pacific Islander participants, where alternative World Health Organization (WHO) cut-points were used. The WHO cut-points are slightly lower to account for usual body habitus and disease risk in that population. BMI changes over study follow-up were not included in the exposure measure for this study. WC (dichotomized around 88 cm) was also used as an exposure measure.
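The cut-points above translate directly into a simple categorization rule. The following is a minimal sketch, illustrative only: it applies just the standard cut-points and omits the alternative WHO cut-points the study used for Asian/Pacific Islander participants.

```python
def bmi_category(bmi_kg_m2: float) -> str:
    """Classify BMI using the standard cut-points described in the study.

    Note: sketch only; the study applied lower WHO cut-points for
    Asian/Pacific Islander participants, which are not reproduced here.
    """
    if bmi_kg_m2 < 18.5:
        return "underweight"
    if bmi_kg_m2 < 25.0:
        return "healthy weight"
    if bmi_kg_m2 < 30.0:
        return "overweight"
    if bmi_kg_m2 < 35.0:
        return "obese class I"
    if bmi_kg_m2 < 40.0:
        return "obese class II"
    return "obese class III"
```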

Main outcome measures. Disease-free survival status during the follow-up period. In the year at which participants were supposed to reach their 85th birthdays, they were categorized as to whether they had survived or not. Survival status was ascertained by hospital record review, autopsy reports, death certificates and review of the National Death Index. Those who survived were sub-grouped according to type of survival into 1 of the following categories: (1) no incident disease and no mobility disability (healthy), (2) baseline disease present but no incident disease or mobility disability during follow-up (prevalent disease), (3) incident disease but no mobility disability during follow-up (incident disease), and (4) incident mobility disability with or without incident disease (disabled).
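The four survivor subgroups plus death form a mutually exclusive five-way outcome. A hypothetical sketch of the assignment logic follows; the flag names are our own invention, and the study's actual coding is not reproduced here.

```python
def survival_category(died: bool, baseline_disease: bool,
                      incident_disease: bool, incident_disability: bool) -> str:
    """Assign one of the study's five outcome categories (illustrative sketch).

    Precedence follows the definitions in the text: death first, then
    disability (with or without incident disease), then incident disease,
    then prevalent disease; otherwise healthy.
    """
    if died:
        return "died"
    if incident_disability:  # incident mobility disability, +/- incident disease
        return "disabled"
    if incident_disease:
        return "incident disease"
    if baseline_disease:
        return "prevalent disease"
    return "healthy"
```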

Diseases of interest (prevalent and incident) included coronary and cerebrovascular disease, cancer, diabetes and hip fracture—the conditions the investigators felt most increased risk of death or morbidity and mobility disability in this population of aging women. Baseline disease status was defined using self-report, but incident disease in follow-up was more rigorously defined using self-report plus medical record review, except for incident diabetes, which required only self-report of diagnosis plus report of new oral hypoglycemic or insulin use.

Because the outcome of interest (survival status) had 5 possible categories, multinomial logistic regression was used as the analytic technique, with baseline BMI category and WC categories as predictors. The authors adjusted for baseline characteristics including age, race/ethnicity, study arm (intervention or observational for WHI), educational level, marital status, smoking status, ethanol use, self-reported physical activity and depression symptoms. Because of the possibly interrelated predictors (BMI and WC), the authors built BMI models with and without WC, and when WC was the primary predictor they adjusted for a participant’s BMI in order to try to isolate the impact of central adiposity. Additionally, they performed the analyses stratified by race and ethnicity as well as by smoking status.

Results. The mean (SD) baseline age of participants was 72.4 (3) years, and the vast majority (88.5%) self-identified as non-Hispanic white. At the end of the follow-up period, of the initial 36,611 participants, 9079 (24.8%) had died, 6702 (18.3%) had become disabled, 8512 (23.2%) had developed incident disease without disability, 5366 (14.6%) had prevalent but no incident disease, and 6952 (18.9%) were categorized as healthy. A number of potentially confounding baseline characteristics differed between the survival categories. Importantly, race was associated with survival status—non-Hispanic white women were more likely to be in the “healthy” category at follow-up than their counterparts from other races/ethnicities. Baseline smokers and women with less than a high school education were both less likely to survive to 85 years.

In models adjusting for baseline covariates, with BMI category as the primary predictor, women with an obese baseline BMI had significantly increased odds of not living to 85 years of age, relative to women in a healthy baseline BMI category, with increasing odds of death among those with higher baseline BMI levels (class I obesity odds ratio [OR] 1.72 [95% CI 1.55–1.92], class II obesity OR 3.28 [95% CI 2.69–4.01], class III obesity OR 3.48 [95% CI 2.52–4.80]). Amongst survivors, baseline obesity was also associated with greater odds of developing incident disease, relative to healthy weight women (class I obesity OR 1.65 [95% CI 1.48–1.84], class II obesity OR 2.44 [95% CI 2.02–2.96], class III obesity OR 1.73 [95% CI 1.21–2.46]). There was a striking relationship between baseline obesity and the odds of incident disability during follow-up (class I obesity OR 3.22 [95% CI 2.87–3.61], class II obesity OR 6.62 [95% CI 5.41–8.09], class III obesity OR 6.65 [95% CI 4.80–9.21]).
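The odds ratios reported above come from covariate-adjusted multinomial models, but the underlying odds-ratio arithmetic and Wald-type confidence interval can be illustrated with an unadjusted 2 × 2 computation. The counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_noncases,
                  unexposed_cases, unexposed_noncases, z=1.96):
    """Unadjusted odds ratio with a Wald 95% confidence interval.

    Illustrative only: the study's ORs are from covariate-adjusted
    multinomial logistic regression, not a raw 2x2 table.
    """
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    # Standard error of log(OR) via the Woolf formula
    se = math.sqrt(1 / exposed_cases + 1 / exposed_noncases +
                   1 / unexposed_cases + 1 / unexposed_noncases)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper
```

For example, with 20 deaths among 100 exposed women and 10 among 100 unexposed, the function returns an OR of 2.25 with a wide interval spanning 1.0, reflecting the small hypothetical sample.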

Women who were overweight at baseline also displayed statistically significant but more modestly increased odds of incident disease, mobility disability, and death relative to their normal-weight counterparts. Importantly, even in multivariable models, being underweight at baseline was also associated with significantly increased odds of death before age 85 relative to healthy weight individuals (OR 2.09 [95% CI 1.54–2.85]) but not with increased odds of incident disease or disability.

When WC status was adjusted for in the “BMI-outcome” models, the odds of death, disability, and incident disease were attenuated for obese women but remained elevated, particularly for women with class II or III obesity. When WC was examined as a primary predictor in multivariable models (adjusted for BMI category), those women with baseline WC ≥ 88 cm experienced increased odds of incident disease (OR 1.47 [95% CI 1.33–1.62]), mobility disability (OR 1.64 [95% CI 1.49–1.84]) and death (OR 1.83 [95% CI 1.66–2.03]) compared to women with smaller baseline WC.

When participants were stratified by race/ethnicity, the relationships for increasing odds of incident disease/disability with baseline obesity persisted for non-Hispanic white and black/African-American participants. Hispanic/Latina participants who were obese at baseline, however, did not have significantly increased odds of death before 85 years relative to healthy weight counterparts, although there were far fewer of these women represented in the cohort (n = 600). Asian/Pacific Islander (API) participants (n = 781), the majority of whom were in the healthy weight range at baseline (57%), showed a somewhat different pattern. Odds ratios for incident disease and death among obese API women were not significantly elevated relative to healthy weight women (although the numbers in these groups were relatively small); however, the odds of incident disability were significantly elevated amongst API women who were obese at baseline (OR 4.95 [95% CI 1.51–16.23]).

Conclusion. Compared to older women with a healthy BMI, obese women and those with increased abdominal circumference had a lower chance of surviving to age 85 years. Those who did survive were more likely to develop incident disease and/or disability than their healthy weight counterparts.

Commentary

The prevalence of obesity has risen substantially over the past several decades, and few demographic groups have found themselves spared from the epidemic [2]. Although much focus is placed on obesity incidence and prevalence among children and young adults, adults over age 60, a growing segment of the US population, are heavily impacted by the rising rates of obesity as well, with 42% of women and 37% of men in this group characterized as obese in 2010 [2]. This trend has potentially major implications for policy makers who are tasked with cutting the cost of programs such as Medicare.

Obesity has only recently been recognized as a disease by the American Medical Association, and yet it has long been associated with costly and debilitating chronic conditions such as type 2 diabetes, hypertension, sleep apnea, and degenerative joint disease [3]. Despite this fact, several epidemiologic studies have suggested an “obesity paradox”: older adults who are mildly obese have mortality rates similar to normal weight adults, and those who are overweight appear to have lower mortality [4]. These papers have generated controversy among obesity researchers and epidemiologists who have grappled with the following question: How is it possible that overweight and obesity, while clearly linked to so many chronic conditions that increase mortality and morbidity, might be a good thing? Is there such a thing as a “healthy level of obesity,” or can you be “fit and fat”? In the midst of these discussions and the media storm that inevitably surrounds them, patients are confronted with confusing mixed messages, possibly making them less likely to attempt to maintain a healthy body weight. Unfortunately, as many prior authors have asserted, most of the epidemiologic studies that assert this protective effect of overweight and obesity have not accounted for potentially important confounders of the “weight category–mortality” relationship, such as smoking status [5]. Among older adults, a substantial fraction of those in the normal weight category are at a so-called healthy BMI for very unhealthy reasons, such as cigarette smoking, cancer, or other chronic conditions (ie, they were heavier but lost weight due to underlying illness). Including these sick (but so-called “healthy weight”) people alongside those who are truly healthy and in a healthy BMI range muddies the picture and does not effectively isolate the impact of weight status on morbidity and mortality.

This cohort study by Rillamas-Sun et al makes an important contribution to the discussion by relying on a very large and comprehensive dataset, with an impressive follow-up period of nearly 2 decades, to more fully isolate the relationship between BMI category and survival for postmenopausal women. By adjusting for important potential confounders such as baseline smoking status, alcohol use, chronic disease status and a number of sociodemographic factors, and by separating out the chronically ill patients from the beginning, the investigators reached conclusions that seem to align better with all that we know about the increased health risks conferred by obesity. They found that postmenopausal women who were obese but without prevalent disease at baseline had increased odds of death before age 85, as well as increased odds of incident chronic disease (such as cardiovascular disease or diabetes) and increased odds of incident disability relative to postmenopausal women starting out in a healthy BMI range. Degree of obesity seemed to matter as well; those with class II and III obesity had significantly increased odds of developing mobility impairment, in particular, relative to normal weight women. This is particularly important when viewed through the lens of caring for an aging population—those who have significant mobility impairment will have a much harder time caring for themselves as they age. Furthermore, they found that overweight women also faced slightly increased odds of these outcomes relative to normal weight women. Abdominal adiposity, in particular, appeared to confer risk of death and disease, as elevated odds of mortality and incident disease or disability persisted in women with waist circumference ≥ 88 cm even after adjusting for BMI. 
Consistent with prior research on this topic, this study also found that being underweight increases one's odds of death; however, there was no increased incidence of disease or mobility disability for underweight women (relative to those starting at a healthy weight).

The authors of the study made a wise decision in separating women with baseline chronic illness from those who had not yet been diagnosed with diabetes, cardiovascular disease or other chronic condition at baseline. As is pointed out in an editorial accompanying this study [6], this creates a scenario where the exposure (obesity) clearly predates the outcome (chronic illness), helping to avoid contamination of risk estimates by reverse causation (ie, is chronic illness leading to increased obesity, with the downstream increase in mortality actually due to the chronic illness?).

Despite the clear strengths of the study, there are several important limitations that must be acknowledged in interpreting the results. The most obvious is that BMI status was only measured at baseline. There is no way of knowing either what a participant’s weight trajectory had been in their younger years, or what happened to the BMI during the study follow-up period, both of which could certainly impact a participant’s risk of morbidity or mortality. Given a follow-up period of nearly 20 years, it is possible that there was crossover between BMI (exposure) categories after baseline assignment. Furthermore, the study does not address the very important question of how an intervention to promote weight loss in older women might impact morbidity and mortality—it is possible that encouraging weight loss in this population may in fact worsen health outcomes for some patients [6].

The generalizability of the study may be somewhat limited. The study population represented a group of women who were likely relatively healthy and motivated, having self-selected to participate in the WHI, and who thus could have been healthier than groups studied in previous population-based samples. Furthermore, the results may not generalize to men; however, similar cohort studies with male participants have reached similar conclusions [7].

Applications for Clinical Practice

To promote longevity and maintenance of independence in our growing population of postmenopausal women, it is important that physicians continue to educate and assist their patients in maintaining a healthy weight as they age. Although the impact of intentional weight loss in obese older women is not addressed by this paper, it does support the idea that obese postmenopausal women are at higher risk of death before age 85 years and disability. Therefore, for these patients, physicians should take particular care to reinforce healthy lifestyle choices such as good nutrition and regular physical activity.

—Kristina Lewis, MD, MPH

References

1. Design of the Women’s Health Initiative clinical trial and observational study. The Women’s Health Initiative Study Group. Control Clin Trials 1998;19:61–109.

2. Flegal KM, Carroll MD, Kit BK, Ogden CL. Prevalence of obesity and trends in the distribution of body mass index among US adults, 1999-2010. JAMA 2012;307:491–7.

3. Must A, Spadano J, Coakley EH, et al. The disease burden associated with overweight and obesity. JAMA 1999;282:1523–9.

4. Flegal KM, Kit BK, Orpana H, Graubard BI. Association of all-cause mortality with overweight and obesity using standard body mass index categories: a systematic review and meta-analysis. JAMA 2013;309:71–82.

5. Jackson CL, Stampfer MJ. Maintaining a healthy body weight is paramount. JAMA Intern Med 2014;174:23–4.

6. Dixon JB, Egger GJ, Finkelstein EA, et al. ‘Obesity Paradox’ misunderstands the biology of optimal weight throughout the life cycle. Int J Obesity 2014.

7. Reed DM, Foley DJ, White LR, et al. Predictors of healthy aging in men with high life expectancies. Am J Public Health 1998;88:1463–8.


Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Display Headline
How Valid Is the “Healthy Obese” Phenotype For Older Women?

Finding the Optimum in the Use of Elective Percutaneous Coronary Intervention

Article Type
Changed
Tue, 03/06/2018 - 16:05
Display Headline
Finding the Optimum in the Use of Elective Percutaneous Coronary Intervention

From the VA Eastern Colorado Health Care System, University of Colorado School of Medicine, and the Colorado Cardiovascular Outcomes Research Group, Denver and Aurora, CO.

 

Abstract

  • Objective: To review the use of elective percutaneous coronary intervention (PCI), evaluate what is currently known about elective PCI in the context of appropriate use criteria, and offer insight into next steps to optimize the use of elective PCI to achieve high-quality care.
  • Methods: Review of the scientific literature, appropriate use criteria, and professional society guidelines relevant to elective PCI.
  • Results: Recent studies have demonstrated as many as 1 in 6 elective PCIs are inappropriate as determined by appropriate use criteria. These inappropriate PCIs are not anticipated to benefit patients and result in unnecessary patient risk and cost. While these studies are consistent with regard to overuse of elective PCI, less is known about potential underuse of PCI for elective indications. We lack health status data on populations of ischemic heart disease patients to inform PCI underuse that may contribute to patient symptom burden, functional status, and quality of life. Optimal use of PCI will be attained with longitudinal capture of patient-reported health status, study of factors contributing to overuse and underuse, refinement of the appropriate use criteria with particular focus on patient-centered measures, and incorporation of patient preference and shared decision making into appropriateness evaluation tools.
  • Conclusion: The use of elective PCI is less than optimal in current clinical practice. Continued effort is needed to ensure elective PCI is targeted to patients with anticipated benefit and use of the procedure is aligned with patient preferences.

 

Providing the right care to the right patient at the right time is essential to the practice of high-quality care. Reducing overuse of health care services is part of this equation, and initiatives to reduce inappropriate use and to encourage physicians and patients to “choose wisely” have been introduced [1]. One procedure that is being examined with a focus on appropriateness is percutaneous coronary intervention (PCI). This procedure is common (nearly 1 million inpatient PCI procedures performed in 2010), presents risks to the patient, and is expensive (attributable cost approximately $10 billion in 2010) [2,3]. While the clinical benefit of PCI in acute settings such as ST-segment elevation myocardial infarction is well established [4], the benefit of PCI in nonacute (elective) settings is less robust [5–7]. Prior studies have demonstrated PCI for stable ischemic heart disease does not result in mortality benefit [6]. Furthermore, PCI as an initial strategy for symptom relief of stable angina may offer little benefit relative to medications alone [5]. Given that PCI is common, costly, and associated with both short- and long-term risks [8,9], ensuring this therapy is provided to the right patient at the right time is important.

In 2009, appropriate use criteria (AUC) were developed by 6 professional organizations to support the rational and judicious use of PCI [10]; a focused update was published in 2012 [11]. In this review, we discuss the recommendations for appropriate use and their application and offer thoughts on next steps to optimize the use of elective PCI as part of high-quality care.

Variation in the Use of PCI

Since 1996, the Dartmouth Atlas of Health Care has documented substantial geographic variation in health care utilization and spending in the United States [12]. This variation includes a 10-fold difference in the use of PCI across geographic regions [13] (Figure 1). Several studies have suggested that much of this variation reflects overuse. For example, in a cohort study of patients with acute myocardial infarction, patients who lived in regions with lower health care expenditures were more likely to receive guideline-recommended medications at discharge, had similar access to follow-up care, reported similar functional health status and satisfaction with care, and had lower mortality than patients in high-expenditure regions [14,15]. These findings suggest overuse, as higher healthcare expenditures were not associated with better quality of care or patient outcomes.

Additionally, significant public attention has been focused on the issue of overuse after lay press investigations into community practice patterns. In particular, a case study presented in the New York Times highlighted the community of Elyria, Ohio, which was found to have a PCI rate that was 4 times the national average [16]. This investigation sparked public debate and further focused attention on the issue of overuse of elective PCI. Conversely, others have pointed to data that suggest underuse of coronary procedural care, particularly among women and racial and ethnic minorities [17–22].

Appropriate Use Criteria

Development Methodology

AUC for PCI, which were developed through the collaborative efforts of 6 major cardiovascular professional organizations, are intended to support the effective, efficient, and equitable use of PCI [10,11]. They were developed in response to a growing need to support rational use of cardiovascular procedures as part of high-quality care. The methods of development for the AUC have been described in detail in the criteria publications [10,11]. We briefly review these methods here.

In developing the criteria, a writing group created clinical scenarios for which coronary revascularization might be considered in everyday clinical practice [23] (Figure 2). These clinical scenarios were then presented to a 17-member technical panel, members of which were nominated by national cardiology societies. Technical panel members then rated the appropriateness of PCI for each scenario based on randomized trial data, clinical practice guidelines, and their expert opinion. For purposes of AUC development, appropriateness was defined as “when the expected benefits, in terms of survival or health outcomes (symptoms, functional status, and/or quality of life) exceed the expected negative consequences of the procedure [10].”

Panel members first individually assigned ratings to each clinical scenario that ranged from 1 (least appropriate) to 9 (most appropriate). This was followed by an in-person meeting in which technical panel members discussed scenarios for which there was wide variation in appropriateness assessment. After this meeting, technical panel members again assigned ratings for each scenario from 1 to 9. After this second round, the median values for the pooled ratings were used as the appropriateness classification for each scenario. Scenarios with median values of 1–3 were classified as “inappropriate,” 4–6 as “uncertain,” and 7–9 as “appropriate.” A rating of “appropriate” represented clinical scenarios in which the indication is considered generally acceptable and likely to improve health outcomes or survival. A rating of “uncertain” represented clinical scenarios where the indication may be reasonable but more research is necessary to further understand the relative benefits and risks of PCI in this setting. Finally, a rating of “inappropriate” represented clinical scenarios in which the indication is not generally acceptable as it is unlikely to improve health outcomes or survival.
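The median-based classification described above can be sketched as a small function. This is a simplification: how the AUC process handles non-integer medians or residual panel disagreement is an assumption here, not something specified in the text.

```python
from statistics import median

def auc_classification(panel_ratings):
    """Map a technical panel's 1-9 appropriateness ratings to a category
    using the median, as in the AUC development process described above.

    Boundary handling for non-integer medians (e.g., 3.5) is an assumption.
    """
    m = median(panel_ratings)
    if m <= 3:
        return "inappropriate"
    if m <= 6:
        return "uncertain"
    return "appropriate"
```

For a 17-member panel this reduces each clinical scenario to a single label, which is how the published criteria present them.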

The approach used for AUC development appears to be valid, as Class III indications for PCI in the ACC/AHA clinical guideline [24] (Class III = PCI should NOT be performed since it is not helpful and may be harmful) and AUC scenarios rated as inappropriate are in 100% agreement (personal communication, Ralph Brindis, past president of the American College of Cardiology).

Application

It is important to remember that the AUC are intended to aid in patient selection and are not absolute. Unique clinical factors and patient preference cannot feasibly be captured by the AUC scenarios. It should also be noted that the intent of the AUC is not to be punitive but rather to identify and assess variation in practice patterns. To reflect this intent, the terminology applied to appropriateness ratings has recently changed. Clinical scenarios previously classified as “inappropriate” are now termed “rarely appropriate” and clinical scenarios classified as “uncertain” are now termed “may be appropriate.”

Although the AUC were developed to help evaluate practice patterns of care delivery and serve as guides for clinical decision making, they were not intended to serve as mandates for or against treatment in individual patients or to be tied to reimbursement for individual patients. Despite this, health care organizations and payors have used other AUC documents for incentive pay and prior authorization programs, specifically for cardiovascular imaging [25]. Use of the AUC in this manner may still be reasonable if application and measurement is at the level of the practice, rather than the individual patient, but much remains to be understood about the implications of applying AUC in reimbursement decisions.

Refinement

The AUC for PCI are designed to be dynamic and continually updated. As additional evidence becomes available regarding the efficacy of PCI in specific clinical scenarios, there will be ongoing efforts to update the AUC to reflect this new evidence. This is highlighted by the first update to the AUC occurring less than 3 years after the original publication date [11].

In addition to perpetual review of the data used to inform scenario ratings, there are opportunities to improve measurement of the clinical variables that are considered in rating PCI appropriateness (eg, clinical presentation, symptom severity, ischemia severity, extent of medical therapy, extent of anatomic disease). For example, in the current AUC, symptom severity is dependent on clinician assessment using the Canadian Cardiovascular Society Classification [25]. Moving toward a patient-centered assessment of symptom severity would ensure that the AUC more closely reflect the patient-perceived symptom burden. Further, the use of a patient-centered instrument would reduce the possibility of physician manipulation of symptom severity to influence the apparent appropriateness of PCI. There are similar opportunities to improve reporting of noninvasive stress test data, such as through standardized reporting of ischemic risk. Finally, the use of physiologic assessments of stenosis severity (eg, fractional flow reserve) and quantitative coronary angiography to standardize interpretations of diagnostic angiography may further optimize the assessment of PCI appropriateness.

Application of the Appropriate Use Criteria in Clinical Practice—Study Results

Application of the AUC to clinical practice has highlighted potential overuse of PCI (Table). The first report came from applying the AUC to the National Cardiovascular Data Registry (NCDR) CathPCI Registry [26]. In this study of more than 500,000 PCIs from over 1000 facilities across the country, the authors found that PCIs performed in the acute setting (STEMI, NSTEMI, and high-risk unstable angina) were almost uniformly classified as appropriate. However, for nonacute (elective) PCI, application of the AUC resulted in the classification of 50% as appropriate, 38% as uncertain, and 12% as inappropriate. The majority of patients who received inappropriate PCI had a low-risk stress test (72%) or were asymptomatic (54%). Additionally, 96% of patients who received PCI classified as inappropriate had not been given a trial of adequate anti-anginal therapy. This analysis was supported by subsequent analyses of 2 other state-specific registries (New York and Washington), which found similar rates of PCI for nonacute indications rated as inappropriate [27,28]. Additionally, all 3 studies showed wide facility-level variation in the percentage of appropriate and inappropriate PCI for elective indications.

These studies also highlight a gap in preprocedural care. The anticipated benefit of elective PCI is related to patient symptom burden, adequacy of anti-anginal therapy, and ischemic risk as determined by noninvasive stress testing. However, 30% to 50% of patients undergo elective PCI without evidence of preprocedural stress testing. Attempts are being made to address this gap with the recent release of PCI performance measures [29]. These performance measures, intended for cardiac catheterization labs, include comprehensive documentation of the indication for PCI, which is central to determination of appropriateness. This integration of procedural indication into a performance measure marks the first such occurrence in cardiology.

As documentation of procedural indication and appropriateness have become part and parcel of assessing quality of care, concerns about “gaming” have become more pertinent. Providers who perform PCI could potentially enhance the appearance of appropriateness by overstating the clinical symptom burden or stress test findings. The incorporation of validated, patient-centered health status questionnaires along with data audit programs have been proposed as measures to prevent this type of abuse. Addressing quality gaps in preprocedural assessment and documentation is critical to optimizing use of elective PCI [28].

The apparent overuse of PCI for elective indications may be a reflection of our fragmented, fee-for-service health care delivery system. However, recent studies challenge these assumptions. In a Canadian study, Ko et al found that 18% of elective PCIs were classified as inappropriate, a proportion similar to what had been found previously in the United States [30]. In a US study of Medicare beneficiaries, Matlock and colleagues observed a fourfold regional variation in use of elective coronary angiography and PCI in both Medicare fee-for-service and capitated Medicare Advantage beneficiaries [31]. Collectively, these studies suggest barriers to optimal patient selection for invasive coronary procedures in both capitated and fee-for-service health care systems. Without addressing factors that contribute to variation in the absence of fee-for-service incentives, efforts to improve integration and reduce fee-for-service reimbursement may be inadequate to optimize PCI use.

Evaluating Underuse

While potential underuse of PCI has been described for acute indications [17–22], study of underuse of PCI for elective indications is more challenging. Population data on the effect of underuse of elective PCI on patient symptom burden, functional status, and quality of life is lacking.

A population-based study from Australia highlights the potential importance of underuse in the care of patients with stable coronary disease. This study assessed symptom burden among patients with chronic stable angina using the Seattle Angina Questionnaire and included patients cared for by 207 primary care practitioners [32]. The authors noted that there was considerable variation in patient symptom burden between practices, with 14% of practices having no patients with more than 1 episode of angina per week and 18% of clinics having more than half of enrolled patients with at least 1 episode of angina per week. The authors postulate that this variability may be due to differences among providers in the identification and management of angina, including using PCI to minimize symptom burden.

In the Ko study mentioned earlier, the AUC was used to examine potential underuse of coronary revascularization procedures. In this study, they analyzed the association between AUC ratings and outcomes in patients undergoing diagnostic coronary angiography [30]. Of patients considered “appropriate” for revascularization following completion of diagnostic angiography, only 69% underwent revascularization. However, the clinical aspects that influence the decision to proceed with revascularization may not be fully captured in this study. Thus, the true degree of underuse of PCI remains elusive.

In summary, the relative lack of data that would allow for the assessment of underuse of elective PCI is an important quality concern. Health systems should work to systematically capture patient-reported health status, including symptom burden data, to identify inadequate symptom control and potential underuse of procedural care for CAD.

Facilitating Optimal Use

CR2June2014_Figure3In current practice, the AUC hold promise to minimize the overuse of elective PCI. This likely involves addressing processes occurring upstream of the cardiac catheterization lab, including employing systems to ensure that procedures are avoided in patients who are unlikely to benefit (eg, asymptomatic, low ischemic burden) (Figure 3) [33]. Studying hospitals that already have low rates of inappropriate PCI may inform the design and dissemination of strategies that will help improve patient selection at hospitals with higher rates. Although professional organizations have developed tools intended to facilitate appropriateness evaluation at the point-of-care [34], the use of these tools are likely to be sporadic without greater integration into the health care delivery system. Further, these applications are currently limited to determination of appropriateness of PCI after completion of the diagnostic coronary angiogram. Identifying processes prior to catheterization that contribute to PCI appropriateness may also streamline appropriate ad hoc PCI, as the need to reassess appropriateness after the diagnostic angiogram may be mitigated.

Significant barriers exist to the application of the AUC for determination of procedural underuse. As described above, we lack adequate data to ascertain gaps in symptom management that could be mitigated by proper use of PCI. Further study of symptom burden in populations of patients with coronary artery disease is needed. This may help in the identification of patient populations whose symptom burden may warrant consideration of invasive coronary procedures, including coronary angiography and PCI.

Finally, it is important to note that the AUC are based on technical considerations, ie, practice guidelines and trial evidence. They do not take into consideration patient preferences. For example, PCI can be technically appropriate for the scenario but inappropriate for the individual if the procedure is not desired by the patient. Similarly, a procedure may be of uncertain benefit but appropriate if the patient desires more aggressive procedural care and has a full understanding of the risks and benefits. Currently, we fail to convey this information to patients, as evidenced by patients’ overestimation of the benefits of PCI [34]. As we continue to work toward optimal use of PCI, we must not only address the technical appropriateness of care, but move toward incorporating patient preferences through a robust process of shared decision-making.

 

Corresponding author: Preston M. Schneider, MD, VA Eastern Colorado Health Care System, Cardiology Section (111B), 1055 Clermont St., Denver, CO 80220, [email protected].

Funding/support: Dr. Schneider is supported by a T32 training grant from the National Institutes of Health (5T32HL00
7822-15). Dr. Bradley is supported by a Career Development Award (HSR&D-CDA2 10-199) from VA Health Services Research & Development.

Financial disclosures: None.

References

1. Cassel CK, Guest JA. Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA 2012;307:1801–2.

2. Go AS, Mozaffarian D, Roger VL, et al. Heart disease and stroke statistics—2013 update: a report from the American Heart Association. Circulation 2013;127:e6–e245.

3. HCUPnet: A tool for identifying, tracking, and analyzing national hospital statistics. Accessed 22 Oct 2013 at http://hcupnet.ahrq.gov/HCUPnet.jsp?Parms=H4sIAAAAAAAAABXBMQ6AIBAEwC9JAg.gsLAhRvjAnnuXgGihFb9XZwYe3EhLdpN2h2aIcsnQLCp9jQVbLDN3ksqDnSeqVXzNfIAP9mtmLy0rZhdIAAAA83D0C2BCAE02DD1508408B2C5C094F1ADF6E788C&JS=Y.

4. Keeley EC, Boura JA, Grines CL. Primary angioplasty versus intravenous thrombolytic therapy for acute myocardial infarction: a quantitative review of 23 randomised trials. Lancet 2003;361:13–20.

5. Boden WE, O’Rourke RA, Teo KK, et al. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med 2007;356:1503–16.

6. Boden WE, O’Rourke RA, Teo KK, et al. Impact of optimal medical therapy with or without percutaneous coronary intervention on long-term cardiovascular end points in patients with stable coronary artery disease (from the COURAGE Trial). Am J Cardiol 2009;104:1–4.

7. Stergiopoulos K, Brown DL. Initial coronary stent implantation with medical therapy vs medical therapy alone for stable coronary artery disease: Meta-analysis of randomized controlled trials. Arch Intern Med 2012;172:312–9.

8. McCullough PA, Adam A, Becker CR, et al. Epidemiology and prognostic implications of contrast-induced nephropathy. Am J Cardiol 2006;98:5K–13K.

9. Roe MT, Messenger JC, Weintraub WS, et al. Treatments, trends, and outcomes of acute myocardial infarction and percutaneous coronary intervention. J Am Coll Cardiol 2010;56:254–63.

10. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: A Report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology Endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530–53.

11. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC/HFSA/SCCT 2012 Appropriate Use Criteria for Coronary Revascularization Focused Update: A Report of the American College of Cardiology Foundation Appropriate Use Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, American Society of Nuclear Cardiology, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2012;59:857–81.

12. Dartmouth Atlas of Health Care. Accessed 8 Jan 2014 at www.dartmouthatlas.org.

13. Dartmouth Atlas of Health Care: Studies of surgical variation. Cardiac surgery report. 2005. Accessed 8 Jan 2014 at www.dartmouthatlas.org/publications/reports.aspx.

14. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273–87.

15. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med 2003;138:288–98.

16. Abelson R. Heart procedure is off the charts in an Ohio city. New York Times 2006. Accessed 23 Apr 2013 at www.nytimes.com/2006/08/18/business/18stent.html.

17. Akhter N, Milford-Beland S, Roe MT, et al. Gender differences among patients with acute coronary syndromes undergoing percutaneous coronary intervention in the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR). Am Heart J 2009;157:141–8.

18. Blomkalns AL, Chen AY, Hochman JS, et al. Gender disparities in the diagnosis and treatment of non–ST-segment elevation acute coronary syndromes: large-scale observations from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association Guidelines) National Quality Improvement Initiative. J Am Coll Cardiol 2005;45:832–7.

19. Daly C, Clemens F, Lopez Sendon JL, et al. Gender differences in the management and clinical outcome of stable angina. Circulation 2006;113:490–8.

20. Groeneveld PW, Heidenreich PA, Garber AM. Racial disparity in cardiac procedures and mortality among long-term survivors of cardiac arrest. Circulation 2003;108:286–91.

21. Hannan EL, Zhong Y, Walford G, et al. Underutilization of percutaneous coronary intervention for ST-elevation myocardial infarction in Medicaid patients relative to private insurance patients. J Intervent Cardiol 2013;26:470–81.

22. Sonel AF, Good CB, Mulgund J, et al. Racial variations in treatment and outcomes of black and white patients with high-risk non–ST-elevation acute coronary syndromes: insights From CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines?). Circulation 2005;111:1225–32.

23. Patel MR, Spertus JA, Brindis RG, et al. ACCF proposed method for evaluating the appropriateness of cardiovascular imaging. J Am Coll Cardiol 2005;46:1606–13.

24. Levine GN, Bates ER, Blankenship JC, et al. 2011 ACCF/AHA/SCAI Guideline for percutaneous coronary intervention: executive summary: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Circulation 2011;124:2574–609.

25. Campeau L. Letter: Grading of angina pectoris. Circulation 1976;54:522–3.

26. Chan PS, Patel MR, Klein LW, et al. Appropriateness of percutaneous coronary intervention. JAMA 2011;306:53–61.

27. Hannan EL, Cozzens K, Samadashvili Z, et al. Appropriateness of coronary revascularization for patients without acute coronary syndromes. J Am Coll Cardiol 2012;59:1870–6.

28. Bradley SM, Maynard C, Bryson CL. Appropriateness of percutaneous coronary interventions in Washington State. Circ Cardiovasc Qual Outcomes 2012;5:445–53.

29. Nallamothu BK, Tommaso CL, Anderson HV, et al. ACC/AHA/SCAI/AMA–Convened PCPI/NCQA 2013 Performance measures for adults undergoing percutaneous coronary intervention. A report of the American College of Cardiology/American Heart Association Task Force on Performance Measures, the Society for Cardiovascular Angiography and Interventions, the American Medical Association–Convened Physician Consortium for Performance Improvement, and the National Committee for Quality Assurance. J Am Coll Cardiol 2014;63:722–45.

30. Ko DT, Guo H, Wijeysundera HC, et al. Assessing the association of appropriateness of coronary revascularization and clinical outcomes for patients with stable coronary artery disease. J Am Coll Cardiol 2012;60:1876–84.

31. Matlock DD, Groeneveld PW, Sidney S, et al. Geographic variation in cardiovascular procedure use among Medicare fee-for-service vs Medicare Advantage beneficiaries. JAMA 2013;310:155–62.

32. Beltrame JF, Weekes AJ, Morgan C, et al. The prevalence of weekly angina among patients with chronic stable angina in primary care practices: the Coronary Artery Disease in General Practice (CADENCE) study. Arch Intern Med 2009;169:1491–9.

33. Bradley SM, Spertus JA, Nallamothu BK, et al. The association between patient selection for diagnostic coronary angiography and hospital-level PCI appropriateness: Insights from the NCDR. Circ Cardiovasc Qual Outcomes 2013;6:A1. Accessed 20 Nov 2013 at http://circoutcomes.ahajournals.org/cgi/content/short/6/3_MeetingAbstracts/A1?rss=1.

34. Lee J, Chuu K, Spertus J, et al. Patients overestimate the potential benefits of elective percutaneous coronary intervention. Mo Med 2012;109:79.

Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6

From the VA Eastern Colorado Health Care System, University of Colorado School of Medicine, and the Colorado Cardiovascular Outcomes Research Group, Denver and Aurora, CO.

 

Abstract

  • Objective: To review the use of elective percutaneous coronary intervention (PCI), evaluate what is currently known about elective PCI in the context of appropriate use criteria, and offer insight into next steps to optimize the use of elective PCI to achieve high-quality care.
  • Methods: Review of the scientific literature, appropriate use criteria, and professional society guidelines relevant to elective PCI.
  • Results: Recent studies have demonstrated as many as 1 in 6 elective PCIs are inappropriate as determined by appropriate use criteria. These inappropriate PCIs are not anticipated to benefit patients and result in unnecessary patient risk and cost. While these studies are consistent with regard to overuse of elective PCI, less is known about potential underuse of PCI for elective indications. We lack the population-level health status data on patients with ischemic heart disease needed to identify PCI underuse that may contribute to symptom burden and impaired functional status and quality of life. Optimal use of PCI will be attained with longitudinal capture of patient-reported health status, study of factors contributing to overuse and underuse, refinement of the appropriate use criteria with particular focus on patient-centered measures, and incorporation of patient preference and shared decision making into appropriateness evaluation tools.
  • Conclusion: The use of elective PCI is less than optimal in current clinical practice. Continued effort is needed to ensure elective PCI is targeted to patients with anticipated benefit and use of the procedure is aligned with patient preferences.

 

Providing the right care to the right patient at the right time is essential to the practice of high-quality care. Reducing overuse of health care services is part of this equation, and initiatives to reduce inappropriate use and to encourage physicians and patients to “choose wisely” have been introduced [1]. One procedure that is being examined with a focus on appropriateness is percutaneous coronary intervention (PCI). This procedure is common (nearly 1 million inpatient PCI procedures performed in 2010), presents risks to the patient, and is expensive (attributable cost approximately $10 billion in 2010) [2,3]. While the clinical benefit of PCI in acute settings such as ST-segment elevation myocardial infarction is well established [4], the benefit of PCI in nonacute (elective) settings is less robust [5–7]. Prior studies have demonstrated PCI for stable ischemic heart disease does not result in mortality benefit [6]. Furthermore, PCI as an initial strategy for symptom relief of stable angina may offer little benefit relative to medications alone [5]. Given that PCI is common, costly, and associated with both short- and long-term risks [8,9], ensuring this therapy is provided to the right patient at the right time is important.

In 2009, appropriate use criteria (AUC) were developed by 6 professional organizations to support the rational and judicious use of PCI [10]; a focused update was published in 2012 [11]. In this review, we discuss the recommendations for appropriate use and their application and offer thoughts on next steps to optimize the use of elective PCI as part of high-quality care.

Variation in the Use of PCI

Since 1996, the Dartmouth Atlas of Health Care has documented substantial geographic variation in health care utilization and spending in the United States [12]. This variation includes a 10-fold difference in the use of PCI across geographic regions [13] (Figure 1). Several studies have suggested that much of this variation reflects overuse. For example, in a cohort study of patients with acute myocardial infarction, patients who lived in regions with lower health care expenditures were more likely to receive guideline-recommended medications at discharge, had similar access to follow-up care, reported similar functional health status and satisfaction with care, and had lower mortality than patients in high-expenditure regions [14,15]. These findings suggest overuse, as higher health care expenditures were not associated with better quality of care or patient outcomes.

Additionally, significant public attention has been focused on the issue of overuse after lay press investigations into community practice patterns. In particular, a case study presented in the New York Times highlighted the community of Elyria, Ohio, which was found to have a PCI rate that was 4 times the national average [16]. This investigation sparked public debate and further focused attention on the issue of overuse of elective PCI. Conversely, others have pointed to data that suggest underuse of coronary procedural care, particularly among women and racial and ethnic minorities [17–22].

Appropriate Use Criteria

Development Methodology

AUC for PCI, which were developed through the collaborative efforts of 6 major cardiovascular professional organizations, are intended to support the effective, efficient, and equitable use of PCI [10,11]. They were developed in response to a growing need to support rational use of cardiovascular procedures as part of high-quality care. The methods of development for the AUC have been described in detail in the criteria publications [10,11]. We briefly review these methods here.

In developing the criteria, a writing group created clinical scenarios for which coronary revascularization might be considered in everyday clinical practice [23] (Figure 2). These clinical scenarios were then presented to a 17-member technical panel, members of which were nominated by national cardiology societies. Technical panel members then rated the appropriateness of PCI for each scenario based on randomized trial data, clinical practice guidelines, and their expert opinion. For purposes of AUC development, appropriateness was defined as “when the expected benefits, in terms of survival or health outcomes (symptoms, functional status, and/or quality of life) exceed the expected negative consequences of the procedure [10].”

Panel members first individually assigned ratings to each clinical scenario that ranged from 1 (least appropriate) to 9 (most appropriate). This was followed by an in-person meeting in which technical panel members discussed scenarios for which there was wide variation in appropriateness assessment. After this meeting, technical panel members again assigned ratings for each scenario from 1 to 9. After this second round, the median values for the pooled ratings were used as the appropriateness classification for each scenario. Scenarios with median values of 1–3 were classified as “inappropriate,” 4–6 as “uncertain,” and 7–9 as “appropriate.” A rating of “appropriate” represented clinical scenarios in which the indication is considered generally acceptable and likely to improve health outcomes or survival. A rating of “uncertain” represented clinical scenarios where the indication may be reasonable but more research is necessary to further understand the relative benefits and risks of PCI in this setting. Finally, a rating of “inappropriate” represented clinical scenarios in which the indication is not generally acceptable as it is unlikely to improve health outcomes or survival.
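The aggregation step described above — pooling the second-round panel ratings, taking the median, and mapping it to a category — can be sketched in a few lines of code. This is an illustrative sketch only: the function name and the sample ratings are invented for demonstration, while the 1–9 scale, median pooling, and category thresholds follow the published criteria.

```python
from statistics import median

def classify_scenario(ratings):
    """Map a panel's second-round ratings (each 1-9) to an AUC category.

    Median 1-3 -> "inappropriate", 4-6 -> "uncertain", 7-9 -> "appropriate",
    per the thresholds in the published AUC methodology.
    """
    if not ratings or any(not 1 <= r <= 9 for r in ratings):
        raise ValueError("each rating must be between 1 and 9")
    m = median(ratings)
    if m <= 3:
        return "inappropriate"
    elif m <= 6:
        return "uncertain"
    return "appropriate"

# Hypothetical second-round ratings from a 17-member technical panel:
panel = [7, 8, 7, 9, 6, 7, 8, 7, 7, 8, 9, 7, 6, 7, 8, 7, 7]
print(classify_scenario(panel))  # appropriate (median rating is 7)
```

Using the median rather than the mean keeps a single outlying rater from shifting a scenario across a category boundary, which is why consensus methods of this kind typically pool ratings this way.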

The approach used for AUC development appears to be valid, as Class III indications for PCI in the ACC/AHA clinical guideline [24] (Class III = PCI should NOT be performed since it is not helpful and may be harmful) and AUC scenarios rated as inappropriate are in 100% agreement (personal communication, Ralph Brindis, past president of the American College of Cardiology).

Application

It is important to remember that the AUC are intended to aid in patient selection and are not absolute. Unique clinical factors and patient preference cannot feasibly be captured by the AUC scenarios. It should also be noted that the intent of the AUC is not to be punitive but rather to identify and assess variation in practice patterns. To reflect this intent, the terminology applied to appropriateness ratings has recently changed. Clinical scenarios previously classified as “inappropriate” are now termed “rarely appropriate” and clinical scenarios classified as “uncertain” are now termed “may be appropriate.”

Although the AUC were developed to help evaluate practice patterns of care delivery and serve as guides for clinical decision making, they were not intended to serve as mandates for or against treatment in individual patients or to be tied to reimbursement for individual patients. Despite this, health care organizations and payors have used other AUC documents for incentive pay and prior authorization programs, specifically for cardiovascular imaging [25]. Use of the AUC in this manner may still be reasonable if application and measurement is at the level of the practice, rather than the individual patient, but much remains to be understood about the implications of applying AUC in reimbursement decisions.

Refinement

The AUC for PCI are designed to be dynamic and continually updated. As additional evidence becomes available regarding the efficacy of PCI in specific clinical scenarios, there will be ongoing efforts to update the AUC to reflect this new evidence. This is highlighted by the first update to the AUC occurring less than 3 years after the original publication date [11].

In addition to perpetual review of the data used to inform scenario ratings, there are opportunities to improve measurement of the clinical variables that are considered in rating PCI appropriateness (eg, clinical presentation, symptom severity, ischemia severity, extent of medical therapy, extent of anatomic disease). For example, in the current AUC, symptom severity is dependent on clinician assessment using the Canadian Cardiovascular Society Classification [25]. Moving toward a patient-centered assessment of symptom severity would ensure that the AUC more closely reflect the patient-perceived symptom burden. Further, the use of a patient-centered instrument would reduce the possibility of physician manipulation of symptom severity to influence the apparent appropriateness of PCI. There are similar opportunities to improve reporting of noninvasive stress test data, such as through standardized reporting of ischemic risk. Finally, the use of physiologic assessments of stenosis severity (eg, fractional flow reserve) and quantitative coronary angiography to standardize interpretations of diagnostic angiography may further optimize the assessment of PCI appropriateness.

Application of the Appropriate Use Criteria in Clinical Practice—Study Results

Application of the AUC to clinical practice has highlighted potential overuse of PCI (Table). The first report came from applying the AUC to the National Cardiovascular Data Registry (NCDR) CathPCI Registry [26]. In this study of more than 500,000 PCIs from over 1000 facilities across the country, the authors found that PCIs performed in the acute setting (STEMI, NSTEMI, and high-risk unstable angina) were almost uniformly classified as appropriate. However, for nonacute (elective) PCI, application of the AUC resulted in the classification of 50% as appropriate, 38% as uncertain, and 12% as inappropriate. The majority of patients who received inappropriate PCI had a low-risk stress test (72%) or were asymptomatic (54%). Additionally, 96% of patients who received PCI classified as inappropriate had not been given a trial of adequate anti-anginal therapy. This analysis was supported by subsequent analyses of 2 other state-specific registries (New York and Washington), which found similar rates of PCI for nonacute indications rated as inappropriate [27,28]. Additionally, all 3 studies showed wide facility-level variation in the percentage of appropriate and inappropriate PCI for elective indications.

These studies also highlight a gap in preprocedural care. The anticipated benefit of elective PCI is related to patient symptom burden, adequacy of anti-anginal therapy, and ischemic risk as determined by noninvasive stress testing. However, 30% to 50% of patients undergo elective PCI without evidence of preprocedural stress testing. Attempts are being made to address this gap with the recent release of PCI performance measures [29]. These performance measures, intended for cardiac catheterization labs, include comprehensive documentation of the indication for PCI, which is central to determination of appropriateness. This integration of procedural indication into a performance measure marks the first such occurrence in cardiology.

As documentation of procedural indication and appropriateness has become part and parcel of assessing quality of care, concerns about “gaming” have become more pertinent. Providers who perform PCI could potentially enhance the appearance of appropriateness by overstating the clinical symptom burden or stress test findings. The incorporation of validated, patient-centered health status questionnaires, along with data audit programs, has been proposed as a measure to prevent this type of abuse. Addressing quality gaps in preprocedural assessment and documentation is critical to optimizing use of elective PCI [28].

The apparent overuse of PCI for elective indications may reflect our fragmented, fee-for-service health care delivery system. However, recent studies challenge this assumption. In a Canadian study, Ko et al found that 18% of elective PCIs were classified as inappropriate, a proportion similar to what had been found previously in the United States [30]. In a US study of Medicare beneficiaries, Matlock and colleagues observed a fourfold regional variation in use of elective coronary angiography and PCI in both Medicare fee-for-service and capitated Medicare Advantage beneficiaries [31]. Collectively, these studies suggest barriers to optimal patient selection for invasive coronary procedures in both capitated and fee-for-service health care systems. Unless the factors that drive variation even in the absence of fee-for-service incentives are addressed, efforts to improve integration and reduce fee-for-service reimbursement may be inadequate to optimize PCI use.

Evaluating Underuse

While potential underuse of PCI has been described for acute indications [17–22], studying underuse of PCI for elective indications is more challenging. Population data on the effect of underuse of elective PCI on patient symptom burden, functional status, and quality of life are lacking.

A population-based study from Australia highlights the potential importance of underuse in the care of patients with stable coronary disease. This study assessed symptom burden among patients with chronic stable angina using the Seattle Angina Questionnaire and included patients cared for by 207 primary care practitioners [32]. The authors noted that there was considerable variation in patient symptom burden between practices, with 14% of practices having no patients with more than 1 episode of angina per week and 18% of clinics having more than half of enrolled patients with at least 1 episode of angina per week. The authors postulate that this variability may be due to differences among providers in the identification and management of angina, including using PCI to minimize symptom burden.

In the Ko study mentioned earlier, the AUC were used to examine potential underuse of coronary revascularization procedures. The authors analyzed the association between AUC ratings and outcomes in patients undergoing diagnostic coronary angiography [30]. Of patients considered “appropriate” for revascularization following completion of diagnostic angiography, only 69% underwent revascularization. However, the clinical factors that influence the decision to proceed with revascularization may not be fully captured in this study. Thus, the true degree of underuse of PCI remains elusive.

In summary, the relative lack of data that would allow for the assessment of underuse of elective PCI is an important quality concern. Health systems should work to systematically capture patient-reported health status, including symptom burden data, to identify inadequate symptom control and potential underuse of procedural care for CAD.

Facilitating Optimal Use

In current practice, the AUC hold promise to minimize the overuse of elective PCI. This likely involves addressing processes occurring upstream of the cardiac catheterization lab, including employing systems to ensure that procedures are avoided in patients who are unlikely to benefit (eg, asymptomatic, low ischemic burden) (Figure 3) [33]. Studying hospitals that already have low rates of inappropriate PCI may inform the design and dissemination of strategies that will help improve patient selection at hospitals with higher rates. Although professional organizations have developed tools intended to facilitate appropriateness evaluation at the point of care [34], the use of these tools is likely to be sporadic without greater integration into the health care delivery system. Further, these applications are currently limited to determination of appropriateness of PCI after completion of the diagnostic coronary angiogram. Identifying processes prior to catheterization that contribute to PCI appropriateness may also streamline appropriate ad hoc PCI, as the need to reassess appropriateness after the diagnostic angiogram may be mitigated.

Significant barriers exist to the application of the AUC for determination of procedural underuse. As described above, we lack adequate data to ascertain gaps in symptom management that could be mitigated by proper use of PCI. Further study of symptom burden in populations of patients with coronary artery disease is needed. This may help in the identification of patient populations whose symptom burden may warrant consideration of invasive coronary procedures, including coronary angiography and PCI.

Finally, it is important to note that the AUC are based on technical considerations, ie, practice guidelines and trial evidence. They do not take into consideration patient preferences. For example, PCI can be technically appropriate for the scenario but inappropriate for the individual if the procedure is not desired by the patient. Similarly, a procedure may be of uncertain benefit but appropriate if the patient desires more aggressive procedural care and has a full understanding of the risks and benefits. Currently, we fail to convey this information to patients, as evidenced by patients’ overestimation of the benefits of PCI [34]. As we continue to work toward optimal use of PCI, we must not only address the technical appropriateness of care, but move toward incorporating patient preferences through a robust process of shared decision-making.

 

Corresponding author: Preston M. Schneider, MD, VA Eastern Colorado Health Care System, Cardiology Section (111B), 1055 Clermont St., Denver, CO 80220, [email protected].

Funding/support: Dr. Schneider is supported by a T32 training grant from the National Institutes of Health (5T32HL00
7822-15). Dr. Bradley is supported by a Career Development Award (HSR&D-CDA2 10-199) from VA Health Services Research & Development.

Financial disclosures: None.

From the VA Eastern Colorado Health Care System, University of Colorado School of Medicine, and the Colorado Cardiovascular Outcomes Research Group, Denver and Aurora, CO.

 

Abstract

  • Objective: To review the use of elective percutaneous coronary intervention (PCI), evaluate what is currently known about elective PCI in the context of appropriate use criteria, and offer insight into next steps to optimize the use of elective PCI to achieve high-quality care.
  • Methods: Review of the scientific literature, appropriate use criteria, and professional society guidelines relevant to elective PCI.
  • Results: Recent studies have demonstrated that as many as 1 in 6 elective PCIs are inappropriate as determined by appropriate use criteria. These inappropriate PCIs are not anticipated to benefit patients and result in unnecessary patient risk and cost. While these studies are consistent with regard to overuse of elective PCI, less is known about potential underuse of PCI for elective indications. We lack health status data on populations of patients with ischemic heart disease to assess underuse of PCI, which may contribute to patient symptom burden, impaired functional status, and reduced quality of life. Optimal use of PCI will be attained with longitudinal capture of patient-reported health status, study of factors contributing to overuse and underuse, refinement of the appropriate use criteria with particular focus on patient-centered measures, and incorporation of patient preference and shared decision making into appropriateness evaluation tools.
  • Conclusion: The use of elective PCI is less than optimal in current clinical practice. Continued effort is needed to ensure elective PCI is targeted to patients with anticipated benefit and use of the procedure is aligned with patient preferences.

 

Providing the right care to the right patient at the right time is essential to the practice of high-quality care. Reducing overuse of health care services is part of this equation, and initiatives to reduce inappropriate use and to encourage physicians and patients to “choose wisely” have been introduced [1]. One procedure that is being examined with a focus on appropriateness is percutaneous coronary intervention (PCI). This procedure is common (nearly 1 million inpatient PCI procedures performed in 2010), presents risks to the patient, and is expensive (attributable cost of approximately $10 billion in 2010) [2,3]. While the clinical benefit of PCI in acute settings such as ST-segment elevation myocardial infarction is well established [4], the evidence of benefit in nonacute (elective) settings is less robust [5–7]. Prior studies have demonstrated that PCI for stable ischemic heart disease does not confer a mortality benefit [6]. Furthermore, PCI as an initial strategy for symptom relief of stable angina may offer little benefit relative to medications alone [5]. Given that PCI is common, costly, and associated with both short- and long-term risks [8,9], ensuring this therapy is provided to the right patient at the right time is important.

In 2009, appropriate use criteria (AUC) were developed by 6 professional organizations to support the rational and judicious use of PCI [10]; a focused update was published in 2012 [11]. In this review, we discuss the recommendations for appropriate use and their application and offer thoughts on next steps to optimize the use of elective PCI as part of high-quality care.

Variation in the Use of PCI

Since 1996, the Dartmouth Atlas of Health Care has documented substantial geographic variation in health care utilization and spending in the United States [12]. This variation includes a 10-fold difference in the use of PCI across geographic regions [13] (Figure 1). Several studies have suggested that much of this variation reflects overuse. For example, in a cohort study of patients with acute myocardial infarction, patients who lived in regions with lower health care expenditures were more likely to receive guideline-recommended medications at discharge, had similar access to follow-up care, reported similar functional health status and satisfaction with care, and had lower mortality than patients in high-expenditure regions [14,15]. These findings suggest overuse, as higher health care expenditures were not associated with better quality of care or patient outcomes.

Additionally, significant public attention has been focused on the issue of overuse after lay press investigations into community practice patterns. In particular, a case study presented in the New York Times highlighted the community of Elyria, Ohio, which was found to have a PCI rate that was 4 times the national average [16]. This investigation sparked public debate and further focused attention on the issue of overuse of elective PCI. Conversely, others have pointed to data that suggest underuse of coronary procedural care, particularly among women and racial and ethnic minorities [17–22].

Appropriate Use Criteria

Development Methodology

AUC for PCI, which were developed through the collaborative efforts of 6 major cardiovascular professional organizations, are intended to support the effective, efficient, and equitable use of PCI [10,11]. They were developed in response to a growing need to support rational use of cardiovascular procedures as part of high-quality care. The methods of development for the AUC have been described in detail in the criteria publications [10,11]. We briefly review these methods here.

In developing the criteria, a writing group created clinical scenarios for which coronary revascularization might be considered in everyday clinical practice [23] (Figure 2). These clinical scenarios were then presented to a 17-member technical panel, members of which were nominated by national cardiology societies. Technical panel members then rated the appropriateness of PCI for each scenario based on randomized trial data, clinical practice guidelines, and their expert opinion. For purposes of AUC development, appropriateness was defined as “when the expected benefits, in terms of survival or health outcomes (symptoms, functional status, and/or quality of life) exceed the expected negative consequences of the procedure” [10].

Panel members first individually assigned ratings to each clinical scenario that ranged from 1 (least appropriate) to 9 (most appropriate). This was followed by an in-person meeting in which technical panel members discussed scenarios for which there was wide variation in appropriateness assessment. After this meeting, technical panel members again assigned ratings for each scenario from 1 to 9. After this second round, the median values for the pooled ratings were used as the appropriateness classification for each scenario. Scenarios with median values of 1–3 were classified as “inappropriate,” 4–6 as “uncertain,” and 7–9 as “appropriate.” A rating of “appropriate” represented clinical scenarios in which the indication is considered generally acceptable and likely to improve health outcomes or survival. A rating of “uncertain” represented clinical scenarios where the indication may be reasonable but more research is necessary to further understand the relative benefits and risks of PCI in this setting. Finally, a rating of “inappropriate” represented clinical scenarios in which the indication is not generally acceptable as it is unlikely to improve health outcomes or survival.
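The final step of the two-round rating procedure described above reduces to a median-based classification. The sketch below illustrates that step; the panel ratings shown are hypothetical, not taken from the actual AUC panel.

```python
from statistics import median

def classify_scenario(ratings):
    """Map a panel's final-round ratings (integers 1-9) to an AUC category.

    Per the AUC methodology: median 1-3 -> "inappropriate",
    4-6 -> "uncertain", 7-9 -> "appropriate".
    """
    m = median(ratings)
    if m <= 3:
        return "inappropriate"
    elif m <= 6:
        return "uncertain"
    else:
        return "appropriate"

# Hypothetical final-round ratings from a 17-member panel.
panel_ratings = [8, 7, 9, 7, 8, 6, 7, 8, 9, 7, 8, 7, 6, 8, 7, 9, 8]
print(classify_scenario(panel_ratings))  # -> appropriate (median = 8)
```

Pooling at the median, rather than the mean, keeps a few outlying ratings from shifting a scenario across a category boundary.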

The approach used for AUC development appears to be valid, as Class III indications for PCI in the ACC/AHA clinical guideline [24] (Class III = PCI should NOT be performed since it is not helpful and may be harmful) and AUC scenarios rated as inappropriate are in 100% agreement (personal communication, Ralph Brindis, past president of the American College of Cardiology).

Application

It is important to remember that the AUC are intended to aid in patient selection and are not absolute. Unique clinical factors and patient preference cannot feasibly be captured by the AUC scenarios. It should also be noted that the intent of the AUC is not to be punitive but rather to identify and assess variation in practice patterns. To reflect this intent, the terminology applied to appropriateness ratings has recently changed. Clinical scenarios previously classified as “inappropriate” are now termed “rarely appropriate” and clinical scenarios classified as “uncertain” are now termed “may be appropriate.”

Although the AUC were developed to help evaluate practice patterns of care delivery and serve as guides for clinical decision making, they were not intended to serve as mandates for or against treatment in individual patients or to be tied to reimbursement for individual patients. Despite this, health care organizations and payors have used other AUC documents for incentive pay and prior authorization programs, specifically for cardiovascular imaging [25]. Use of the AUC in this manner may still be reasonable if application and measurement is at the level of the practice, rather than the individual patient, but much remains to be understood about the implications of applying AUC in reimbursement decisions.

Refinement

The AUC for PCI are designed to be dynamic and continually updated. As additional evidence becomes available regarding the efficacy of PCI in specific clinical scenarios, there will be ongoing efforts to update the AUC to reflect this new evidence. This is highlighted by the first update to the AUC occurring less than 3 years after the original publication date [11].

In addition to perpetual review of the data used to inform scenario ratings, there are opportunities to improve measurement of the clinical variables that are considered in rating PCI appropriateness (eg, clinical presentation, symptom severity, ischemia severity, extent of medical therapy, extent of anatomic disease). For example, in the current AUC, symptom severity is dependent on clinician assessment using the Canadian Cardiovascular Society Classification [25]. Moving toward a patient-centered assessment of symptom severity would ensure that the AUC more closely reflect the patient-perceived symptom burden. Further, the use of a patient-centered instrument would reduce the possibility of physician manipulation of symptom severity to influence the apparent appropriateness of PCI. There are similar opportunities to improve reporting of noninvasive stress test data, such as through standardized reporting of ischemic risk. Finally, the use of physiologic assessments of stenosis severity (eg, fractional flow reserve) and quantitative coronary angiography to standardize interpretations of diagnostic angiography may further optimize the assessment of PCI appropriateness.

Application of the Appropriate Use Criteria in Clinical Practice—Study Results

Application of the AUC to clinical practice has highlighted potential overuse of PCI (Table). The first report came from applying the AUC to the National Cardiovascular Data Registry (NCDR) CathPCI Registry [26]. In this study of more than 500,000 PCIs from over 1000 facilities across the country, the authors found that PCIs performed in the acute setting (STEMI, NSTEMI, and high-risk unstable angina) were almost uniformly classified as appropriate. However, for nonacute (elective) PCI, application of the AUC resulted in the classification of 50% as appropriate, 38% as uncertain, and 12% as inappropriate. The majority of patients who received inappropriate PCI had a low-risk stress test (72%) or were asymptomatic (54%). Additionally, 96% of patients who received PCI classified as inappropriate had not been given a trial of adequate anti-anginal therapy. This analysis was supported by subsequent analyses of 2 state-specific registries (New York and Washington), which found similar rates of PCI for nonacute indications rated as inappropriate [27,28]. Additionally, all 3 studies showed wide facility-level variation in the percentage of appropriate and inappropriate PCI for elective indications.

These studies also highlight a gap in preprocedural care. The anticipated benefit of elective PCI is related to patient symptom burden, adequacy of anti-anginal therapy, and ischemic risk as determined by noninvasive stress testing. However, 30% to 50% of patients undergo elective PCI without evidence of preprocedural stress testing. Attempts are being made to address this gap with the recent release of PCI performance measures [29]. These performance measures, intended for cardiac catheterization labs, include comprehensive documentation of the indication for PCI, which is central to determination of appropriateness. This integration of procedural indication into a performance measure marks the first such occurrence in cardiology.

As documentation of procedural indication and appropriateness has become part and parcel of assessing quality of care, concerns about “gaming” have become more pertinent. Providers who perform PCI could potentially enhance the appearance of appropriateness by overstating the clinical symptom burden or stress test findings. The incorporation of validated, patient-centered health status questionnaires, along with data audit programs, has been proposed as a measure to prevent this type of abuse. Addressing quality gaps in preprocedural assessment and documentation is critical to optimizing use of elective PCI [28].

The apparent overuse of PCI for elective indications may be a reflection of our fragmented, fee-for-service health care delivery system. However, recent studies challenge this assumption. In a Canadian study, Ko et al found that 18% of elective PCIs were classified as inappropriate, a proportion similar to what had been found previously in the United States [30]. In a US study of Medicare beneficiaries, Matlock and colleagues observed a fourfold regional variation in use of elective coronary angiography and PCI in both Medicare fee-for-service and capitated Medicare Advantage beneficiaries [31]. Collectively, these studies suggest barriers to optimal patient selection for invasive coronary procedures in both capitated and fee-for-service health care systems. Without addressing factors that contribute to variation in the absence of fee-for-service incentives, efforts to improve integration and reduce fee-for-service reimbursement may be inadequate to optimize PCI use.

Evaluating Underuse

While potential underuse of PCI has been described for acute indications [17–22], study of underuse of PCI for elective indications is more challenging. Population data on the effect of underuse of elective PCI on patient symptom burden, functional status, and quality of life are lacking.

A population-based study from Australia highlights the potential importance of underuse in the care of patients with stable coronary disease. This study assessed symptom burden among patients with chronic stable angina using the Seattle Angina Questionnaire and included patients cared for by 207 primary care practitioners [32]. The authors noted that there was considerable variation in patient symptom burden between practices, with 14% of practices having no patients with more than 1 episode of angina per week and 18% of clinics having more than half of enrolled patients with at least 1 episode of angina per week. The authors postulate that this variability may be due to differences among providers in the identification and management of angina, including using PCI to minimize symptom burden.

In the Ko study mentioned earlier, the AUC were also used to examine potential underuse of coronary revascularization. The authors analyzed the association between AUC ratings and clinical outcomes in patients undergoing diagnostic coronary angiography [30]. Of patients rated “appropriate” for revascularization following completion of diagnostic angiography, only 69% underwent revascularization. However, the clinical factors that influence the decision to proceed with revascularization may not have been fully captured in this study. Thus, the true degree of underuse of PCI remains elusive.

In summary, the relative lack of data that would allow for the assessment of underuse of elective PCI is an important quality concern. Health systems should work to systematically capture patient-reported health status, including symptom burden data, to identify inadequate symptom control and potential underuse of procedural care for CAD.

Facilitating Optimal Use

In current practice, the AUC hold promise to minimize the overuse of elective PCI. This likely involves addressing processes occurring upstream of the cardiac catheterization lab, including employing systems to ensure that procedures are avoided in patients who are unlikely to benefit (eg, asymptomatic, low ischemic burden) (Figure 3) [33]. Studying hospitals that already have low rates of inappropriate PCI may inform the design and dissemination of strategies that will help improve patient selection at hospitals with higher rates. Although professional organizations have developed tools intended to facilitate appropriateness evaluation at the point of care [34], the use of these tools is likely to be sporadic without greater integration into the health care delivery system. Further, these applications are currently limited to determination of the appropriateness of PCI after completion of the diagnostic coronary angiogram. Identifying processes prior to catheterization that contribute to PCI appropriateness may also streamline appropriate ad hoc PCI, as the need to reassess appropriateness after the diagnostic angiogram may be mitigated.

Significant barriers exist to the application of the AUC for determination of procedural underuse. As described above, we lack adequate data to ascertain gaps in symptom management that could be mitigated by proper use of PCI. Further study of symptom burden in populations of patients with coronary artery disease is needed. This may help in the identification of patient populations whose symptom burden may warrant consideration of invasive coronary procedures, including coronary angiography and PCI.

Finally, it is important to note that the AUC are based on technical considerations, ie, practice guidelines and trial evidence. They do not take into consideration patient preferences. For example, PCI can be technically appropriate for the scenario but inappropriate for the individual if the procedure is not desired by the patient. Similarly, a procedure may be of uncertain benefit but appropriate if the patient desires more aggressive procedural care and has a full understanding of the risks and benefits. Currently, we fail to convey this information to patients, as evidenced by patients’ overestimation of the benefits of PCI [34]. As we continue to work toward optimal use of PCI, we must not only address the technical appropriateness of care, but move toward incorporating patient preferences through a robust process of shared decision-making.

 

Corresponding author: Preston M. Schneider, MD, VA Eastern Colorado Health Care System, Cardiology Section (111B), 1055 Clermont St., Denver, CO 80220, [email protected].

Funding/support: Dr. Schneider is supported by a T32 training grant from the National Institutes of Health (5T32HL007822-15). Dr. Bradley is supported by a Career Development Award (HSR&D-CDA2 10-199) from VA Health Services Research & Development.

Financial disclosures: None.

References

1. Cassel CK, Guest JA. Choosing wisely: helping physicians and patients make smart decisions about their care. JAMA 2012;307:1801–2.

2. Go AS, Mozaffarian D, Roger VL, et al. Heart disease and stroke statistics—2013 update: a report from the American Heart Association. Circulation 2013;127:e6–e245.

3. HCUPnet: A tool for identifying, tracking, and analyzing national hospital statistics. Accessed 22 Oct 2013 at http://hcupnet.ahrq.gov/HCUPnet.jsp?Parms=H4sIAAAAAAAAABXBMQ6AIBAEwC9JAg.gsLAhRvjAnnuXgGihFb9XZwYe3EhLdpN2h2aIcsnQLCp9jQVbLDN3ksqDnSeqVXzNfIAP9mtmLy0rZhdIAAAA83D0C2BCAE02DD1508408B2C5C094F1ADF6E788C&JS=Y.

4. Keeley EC, Boura JA, Grines CL. Primary angioplasty versus intravenous thrombolytic therapy for acute myocardial infarction: a quantitative review of 23 randomised trials. Lancet 2003;361:13–20.

5. Boden WE, O’Rourke RA, Teo KK, et al. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med 2007;356:1503–16.

6. Boden WE, O’Rourke RA, Teo KK, et al. Impact of optimal medical therapy with or without percutaneous coronary intervention on long-term cardiovascular end points in patients with stable coronary artery disease (from the COURAGE Trial). Am J Cardiol 2009;104:1–4.

7. Stergiopoulos K, Brown DL. Initial coronary stent implantation with medical therapy vs medical therapy alone for stable coronary artery disease: Meta-analysis of randomized controlled trials. Arch Intern Med 2012;172:312–9.

8. McCullough PA, Adam A, Becker CR, et al. Epidemiology and prognostic implications of contrast-induced nephropathy. Am J Cardiol 2006;98:5K–13K.

9. Roe MT, Messenger JC, Weintraub WS, et al. Treatments, trends, and outcomes of acute myocardial infarction and percutaneous coronary intervention. J Am Coll Cardiol 2010;56:254–63.

10. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: A Report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology Endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530–53.

11. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC/HFSA/SCCT 2012 Appropriate Use Criteria for Coronary Revascularization Focused Update: A Report of the American College of Cardiology Foundation Appropriate Use Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, American Society of Nuclear Cardiology, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2012;59:857–81.

12. Dartmouth Atlas of Health Care. Accessed 8 Jan 2014 at www.dartmouthatlas.org.

13. Dartmouth Atlas of Health Care: Studies of surgical variation. Cardiac surgery report. 2005. Accessed 8 Jan 2014 at www.dartmouthatlas.org/publications/reports.aspx.

14. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in medicare spending. part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273–87.

15. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in medicare spending. part 2: health outcomes and satisfaction with care. Ann Intern Med 2003;138:288–98.

16. Abelson R. Heart procedure is off the charts in an Ohio city. New York Times 2006. Accessed 23 Apr 2013 at www.nytimes.com/2006/08/18/business/18stent.html.

17. Akhter N, Milford-Beland S, Roe MT, et al. Gender differences among patients with acute coronary syndromes undergoing percutaneous coronary intervention in the American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR). Am Heart J 2009;157:141–8.

18. Blomkalns AL, Chen AY, Hochman JS, et al. Gender disparities in the diagnosis and treatment of non–ST-segment elevation acute coronary syndromes: large-scale observations from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association Guidelines) National Quality Improvement Initiative. J Am Coll Cardiol 2005;45:832–7.

19. Daly C, Clemens F, Lopez Sendon JL, et al. Gender differences in the management and clinical outcome of stable angina. Circulation 2006;113:490–8.

20. Groeneveld PW, Heidenreich PA, Garber AM. Racial disparity in cardiac procedures and mortality among long-term survivors of cardiac arrest. Circulation 2003;108:286–91.

21. Hannan EL, Zhong Y, Walford G, et al. Underutilization of percutaneous coronary intervention for ST-elevation myocardial infarction in Medicaid patients relative to private insurance patients. J Intervent Cardiol 2013;26:470–81.

22. Sonel AF, Good CB, Mulgund J, et al. Racial variations in treatment and outcomes of black and white patients with high-risk non–ST-elevation acute coronary syndromes: insights From CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines?). Circulation 2005;111:1225–32.

23. Patel MR, Spertus JA, Brindis RG, et al. ACCF proposed method for evaluating the appropriateness of cardiovascular imaging. J Am Coll Cardiol 2005;46:1606–13.

24. Levine GN, Bates ER, Blankenship JC, et al. 2011 ACCF/AHA/SCAI Guideline for percutaneous coronary intervention: executive summary: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Circulation 2011;124:2574–609.

25. Campeau L. Letter: Grading of angina pectoris. Circulation 1976;54:522–3.

26. Chan PS, Patel MR, Klein LW, et al. Appropriateness of percutaneous coronary intervention. JAMA 2011;306:53–61.

27. Hannan EL, Cozzens K, Samadashvili Z, et al. Appropriateness of coronary revascularization for patients without acute coronary syndromes. J Am Coll Cardiol 2012;59:1870–6.

28. Bradley SM, Maynard C, Bryson CL. Appropriateness of percutaneous coronary interventions in Washington State. Circ Cardiovasc Qual Outcomes 2012;5:445–53.

29. Nallamothu BK, Tommaso CL, Anderson HV, et al. ACC/AHA/SCAI/AMA–Convened PCPI/NCQA 2013 Performance measures for adults undergoing percutaneous coronary intervention. A report of the American College of Cardiology/American Heart Association Task Force on Performance Measures, the Society for Cardiovascular Angiography and Interventions, the American Medical Association–Convened Physician Consortium for Performance Improvement, and the National Committee for Quality Assurance. J Am Coll Cardiol 2014;63:722–45.

30. Ko DT, Guo H, Wijeysundera HC, et al. Assessing the association of appropriateness of coronary revascularization and clinical outcomes for patients with stable coronary artery disease. J Am Coll Cardiol 2012;60:1876–84.

31. Matlock DD, Groeneveld PW, Sidney S, et al. Geographic variation in cardiovascular procedure use among medicare fee-for-service vs medicare advantage beneficiaries. JAMA 2013;310:155–62.

32. Beltrame JF, Weekes AJ, Morgan C, et al. The prevalence of weekly angina among patients with chronic stable angina in primary care practices: the Coronary Artery Disease in General Practice (CADENCE) study. Arch Intern Med 2009;169:1491–9.

33. Bradley SM, Spertus JA, Nallamothu BK, et al. The association between patient selection for diagnostic coronary angiography and hospital-level PCI appropriateness: Insights from the NCDR. Circ Cardiovasc Qual Outcomes 2013;6:A1. Accessed 20 Nov 2013 at http://circoutcomes.ahajournals.org/cgi/content/short/6/3_MeetingAbstracts/A1?rss=1.

34. Lee J, Chuu K, Spertus J, et al. Patients overestimate the potential benefits of elective percutaneous coronary intervention. Mo Med 2012;109:79.


22. Sonel AF, Good CB, Mulgund J, et al. Racial variations in treatment and outcomes of black and white patients with high-risk non–ST-elevation acute coronary syndromes: insights From CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines?). Circulation 2005;111:1225–32.

23. Patel MR, Spertus JA, Brindis RG, et al. ACCF proposed method for evaluating the appropriateness of cardiovascular imaging. J Am Coll Cardiol 2005;46:1606–13.

24. Levine GN, Bates ER, Blankenship JC, et al. 2011 ACCF/AHA/SCAI Guideline for percutaneous coronary intervention: executive summary: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Circulation 2011;124:2574–609.

25. Campeau L. Letter: Grading of angina pectoris. Circulation 1976;54:522–3.

26. Chan PS, Patel MR, Klein LW, et al. Appropriateness of percutaneous coronary intervention. JAMA 2011;306:53–61.

27. Hannan EL, Cozzens K, Samadashvili Z, et al. Appropriateness of coronary revascularization for patients without acute coronary syndromes. J Am Coll Cardiol 2012;59:1870–6.

28. Bradley SM, Maynard C, Bryson CL. Appropriateness of percutaneous coronary interventions in Washington State. Circ Cardiovasc Qual Outcomes 2012;5:445–53.

29. Nallamothu BK, Tommaso CL, Anderson HV, et al. ACC/AHA/SCAI/AMA–Convened PCPI/NCQA 2013 Performance measures for adults undergoing percutaneous coronary intervention. A report of the American College of Cardiology/American Heart Association Task Force on Performance Measures, the Society for Cardiovascular Angiography and Interventions, the American Medical Association–Convened Physician Consortium for Performance Improvement, and the National Committee for Quality Assurance. J Am Coll Cardiol 2014;63:722–45.

30. Ko DT, Guo H, Wijeysundera HC, et al. Assessing the association of appropriateness of coronary revascularization and clinical outcomes for patients with stable coronary artery disease. J Am Coll Cardiol 2012;60:1876–84.

31. Matlock DD, Groeneveld PW, Sidney S, et al. Geographic variation in cardiovascular procedure use among medicare fee-for-service vs medicare advantage beneficiaries. JAMA 2013;310:155–62.

32. Beltrame JF, Weekes AJ, Morgan C, et al. The prevalence of weekly angina among patients with chronic stable angina in primary care practices: The coronary artery disease in general practice (cadence) study. Arch Intern Med 2009;169:1491–9.

33. Bradley SM, Spertus JA, Nallamothu BK, et al. The association between patient selection for diagnostic coronary angiography and hospital-level PCI appropriateness: Insights from the NCDR. Circ Cardiovasc Qual Outcomes 2013;6:A1. Accessed 20 Nov 2013 at http://circoutcomes.ahajournals.org/cgi/content/short/6/3_MeetingAbstracts/A1?rss=1.

34. Lee J, Chuu K, Spertus J, et al. Patients overestimate the potential benefits of elective percutaneous coronary intervention. Mo Med 2012;109:79.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Display Headline
Finding the Optimum in the Use of Elective Percutaneous Coronary Intervention

Transition Readiness Assessment for Sickle Cell Patients: A Quality Improvement Project

Article Type
Changed
Tue, 03/06/2018 - 16:00
Display Headline
Transition Readiness Assessment for Sickle Cell Patients: A Quality Improvement Project

From the St. Jude Children’s Research Hospital, Memphis, TN.

This article is the fourth in our Hemoglobinopathy Learning Collaborative series. See the related editorial by Oyeku et al in the February 2014 issue of JCOM. (—Ed.) 

 

Abstract

  • Objective: To describe the use of quality improvement (QI) methodology to implement an assessment tool to evaluate transition readiness in youth with sickle cell disease (SCD).
  • Methods: Plan-Do-Study-Act (PDSA) cycles were run to evaluate the feasibility and effectiveness of a provider-based transition readiness assessment.
  • Results: Seventy-two adolescents aged 17 years (53% male) were assessed for transition readiness from August 2011 to June 2013. Results indicated that it is feasible for a provider transition readiness assessment (PTRA) tool to be integrated into a transition program. The newly created PTRA tool can inform the level of preparedness of adolescents with SCD during planning for adult transition.
  • Conclusion: The PTRA tool may be helpful for planning and preparation of youth with SCD to successfully transition to adult care.

 

Sickle cell disease (SCD) is one of the most common genetic disorders in the world and is caused by a mutation producing the abnormal sickle hemoglobin. Patients with SCD are living longer and transitioning from pediatric to adult providers. However, the transition years are associated with high mortality [1–4], risk for increased utilization of emergency care, and underutilization of care maintenance visits [5,6]. Successful transition from pediatric care to adult care is critical in ensuring care continuity and optimal health [7]. Barriers to successful transition include lack of preparation for transition [8,9]. To address this limitation, transition programs have been created to help foster transition preparation and readiness.

Often, chronological age determines when SCD programs transfer patients to adult care; however, age is an inadequate measure of readiness. To determine the appropriate time for transition and to individualize the subsequent preparation and planning prior to transfer, an assessment of transition readiness is needed. A number of checklists exist in the unpublished literature (eg, on institution and program websites), and a few empirically tested transition readiness measures have been developed through literature review, semi-structured interviews, and pilot testing in patient samples [10–13]. The Transition Readiness Assessment Questionnaire (TRAQ) and TRxANSITION scale are non-disease-specific measures that assess self-management and advocacy skills of youth with special health care needs; the TRAQ is self-report whereas the TRxANSITION scale is provider-administered [10,11]. Disease-specific measures have been developed for pediatric kidney transplant recipients [12] and adolescents with cystic fibrosis [13]. Studies using these measures suggest that transition readiness is associated with age, gender, disease type, increased adolescent responsibility/decreased parental involvement, and adherence [10–12].

For patients with SCD, there is no well-validated measure available to assess transition readiness [14]. Telfair and colleagues developed a sickle cell transfer questionnaire that focused on transition concerns and feelings and suggestions for transition intervention programming from the perspective of adolescents, their primary caregivers, and adults with SCD [15]. In addition, McPherson and colleagues examined SCD transition readiness in 4 areas: prior thought about transition, knowledge about steps to transition, interest in learning more about the transition process, and perceived importance of continuing care with a hematologist as an adult provider [8]. They found that adolescents in general were not prepared for transition but that readiness improved with age [8]. Overall, most readiness measures have involved patient self-report or parent proxy report. No current readiness assessment scales incorporate the provider’s assessment, which could help better define the most appropriate next steps in education and preparation for the upcoming transfer to adult care.

The St. Jude Children’s Research Hospital SCD Transition to Adult Care program was started in 2007 and is a companion program to the SCD teen clinic, serving 250 adolescents aged 12 to 18 years. The transition program curriculum addresses all aspects of the transition process. Based on the curriculum components, St. Jude developed and implemented a transition readiness assessment tool to be completed by providers in the SCD transition program. In this article, we describe our use of quality improvement (QI) methodology to evaluate the utility and impact of the newly created SCD transition readiness assessment tool.

Methods

Transition Program

The transition program is directed by a multidisciplinary team; disciplines represented on the team are medical (hematologist, genetic educator, physician assistant, and nurse coordinators), psychosocial (social workers), emotional/cognitive (psychologists), and academic (academic coordinator). In the program, adolescents with SCD and their families are introduced to the concept of transition to adult care at the age of 12. Every 6 months from 12 to 18 years of age, members of the team address relevant topics with patients to increase patients’ disease knowledge and improve their disease self-management skills. Some of the program components include training in completing a personal health record (PHR), genetic education, academic planning, and independent living skills.

Needs Assessment

Prior to initiation of the project, members of the transition program met monthly to informally discuss the progress of patients who were approaching the age of transition to adult care. We found that adolescents did not appear to be ready or well prepared for transition and were often unaware of the various familial and psychosocial issues that needed to be addressed prior to the transfer to adult care. We realized that these discussions needed to occur earlier to allow more time for preparation and transition planning by the patient, family, and medical team. In addition, members of the team each had differing perspectives and did not have the same information with regard to existing familial and psychosocial issues. The discussions were necessary to ensure all team members had pertinent information to make informed decisions about the patient's level of transition readiness. Finally, our criteria for readiness were not standardized or quantifiable. As a result, each patient discussion was lengthy, unstructured, and not very informative. In 2011, a core group from the transition team attended a Health Resources and Services Administration–sponsored Hemoglobinopathies Quality Improvement Workshop to receive training in QI processes. We decided to create a formal, quantitative, individualized assessment of patients' progress toward transition at age 17.

Readiness Assessment Tool

The assessment is divided into 4 domains based on the disciplines represented on the team: medical, psychosocial, emotional/cognitive, and academic (Table). Each discipline developed transition readiness items based on the transition curriculum content. The pediatric hematologist, midlevel provider (physician assistant), and nurse case managers developed the medical domain checklist to assess disease literacy, self-management, and organ dysfunction screening. The psychosocial domain checklist was developed by the social workers to assess patients’ understanding of information related to independent living and adult rights (eg, advance directives), emotional concerns related to transition, self-advocacy skills, and completion of a personal health record, a document designed to assist adolescents in learning about their medical history.

The emotional/cognitive domain checklist was developed by the pediatric psychologist and pediatric neuropsychologist. Because the psychology service is set up to see patients referred by the medical team and is unable to see all patients coming to hematology clinic, the emotional/cognitive checklist is based on identifying previous utilization of psychological services including psychotherapy and cognitive testing and determining whether initiation of services is warranted. The academic domain checklist was developed by the academic coordinator who serves as a liaison between the medical team and the school system. This checklist assesses whether the adolescent is meeting high school graduation requirements, able to verbalize an educational/job training plan, on track with future planning (eg, completed required testing), knowledgeable about community educational services, and able to self-advocate (eg, apply for SSI benefits).

Items within each domain have equal value (ie, each question on the checklist is worth 1 point) and the sum of points yields the quantifiable assessment of how well patients are performing in each area of their health. Assessment meetings occur monthly when eligible patients are discussed. Domains are evaluated by the health care provider responsible for his/her own domain (eg, social worker completes the psychosocial domain, the academic coordinator completes the academic domain, etc.).
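The equal-weight scoring scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only: the domain names follow the article, but the individual checklist items shown are hypothetical placeholders, not the actual items of the tool.

```python
# Illustrative sketch of the equal-weight domain scoring described in the text.
# Each checklist item is worth 1 point; a domain's score is the sum of items met.
# Item names below are hypothetical placeholders, not the tool's actual items.

def score_domain(responses):
    """Sum 1 point for every checklist item the patient has met."""
    return sum(1 for met in responses.values() if met)

def score_assessment(assessment):
    """Return per-domain scores plus an overall total for one patient."""
    scores = {domain: score_domain(items) for domain, items in assessment.items()}
    scores["total"] = sum(scores.values())
    return scores

patient = {
    "medical": {"names medications": True, "explains genotype": False},
    "psychosocial": {"completed PHR": True, "knows adult rights": True},
    "emotional/cognitive": {"prior testing reviewed": True},
    "academic": {"on track to graduate": True, "has education/job plan": False},
}

print(score_assessment(patient))
```

A summary like this is what the monthly assessment meeting would review per patient, with each domain scored by the provider responsible for it.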

PDSA Methodology

PDSA (Plan-Do-Study-Act) methodology was used to develop and evaluate the assessment tool. PDSA is a QI method that tests small-scale changes to a process, primarily within health care environments [16]. PDSA is executed in cycles: changes are tested on a small scale, barriers are identified, and adjustments are made in subsequent cycles as needed, improving the process with each iteration.

For the QI project, 3 PDSA cycles were completed for the development and implementation of the assessment tool (Figure 1). We established a goal of completing an assessment for 80% of eligible patients (Figure 2). We used the clinical database to track this goal for each PDSA cycle. The period of data collection was August 2011 through May 2013. All adolescents receiving medical care in the SCD teen clinic aged 17 and 18 years were eligible for evaluation. From August 2011 to June 2013 we assessed 72 patients (53% male), median age 17.04 years. The following sickle cell genotypes were represented: 40 HbSS, 19 HbSC, 8 HbSβ+, 3 HbSβ0, and 2 HbS/HPFH. The data were collected for this report with institutional review board approval.

Cycle 1

The objective of the first cycle was to assess the feasibility and acceptability of the assessment tool. Patients were assessed during the month of their 17th birthday. Fourteen of 16 eligible patients (87.5%) were assessed: 1 patient was lost to follow-up, and 1 patient was inadvertently omitted due to an administrative error. Feedback from the first cycle revealed that some items on the emotional/cognitive domain checklist were not clearly defined, and there was some overlap with the psychosocial domain checklist. Additionally, some items were not readily assessed by psychology given the structure of psychology services at the institution: not all patients are seen by psychology; patients are referred by the team, and appointments occur in the psychology clinic rather than being integrated into the hematology clinic visit.

Cycle 2

The second cycle addressed some of the problems identified during Cycle 1. The emotional/cognitive domain checklist was revised to reflect psychology clinic utilization (psychotherapy and testing) and a section was added where team members could indicate individualized action plans. Seventeen patients out of 18 eligible patients were assessed (94.4%): 1 patient was lost to follow-up. At the conclusion of this cycle, we found that several patients had not completed certain transition program components, such as genetic education or their PHR. Therefore, we decided that we needed to indicate this and create a Plan of Action (POA) to ensure completion of program components. The POA indicated which components were outstanding, when these components would be completed, and when the team would discuss the patient again to track their progress with program components (eg, 6 months later).

Cycle 3

After a few months of using the assessment process, each member of the team provided feedback about their observations from the second cycle. The third PDSA cycle addressed some of the barriers identified in Cycle 2 by adding the POA and a timeline for reassessment. With this information, the nurse case manager was able to identify and contact families who had significant gaps in the learning curriculum. Additionally, services such as psychological testing were scheduled in a timely manner to address academic problems and to provide rationale for accommodations and academic/vocational services before patients transferred care to the adult provider. With the number of assessed patients increasing, it was determined that a reliable tracking system to monitor progress was essential. Thus, a transition database was created to document the domain scores, individualized plan of action, and other components of the transition program, such as medical literacy quiz scores, completion of pre-transfer visits to adult providers, and completion of the PHR. During this cycle, 20 of 22 eligible patients were assessed (90.9%); 2 patients were lost to follow-up.

Cycle 4

This cycle is currently underway and comprises monthly assessments of eligible 17-year-old patients with SCD. From January 2013 to May 2013 we have assessed 100% of the eligible patients (21/21). All information obtained through the assessment tool is added to the transition database. Future adjustments and modifications are planned for this tool as we continue to evaluate its impact and value.
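The per-cycle completion rates reported above can be checked against the program's 80% goal with a short calculation. The assessed/eligible counts below are those reported for Cycles 1 through 4 in the text; the code itself is only a sketch of the tracking arithmetic, not the program's actual database tooling.

```python
# Check each PDSA cycle's assessment completion rate against the 80% goal.
# Counts (assessed, eligible) are those reported in the article for each cycle.

GOAL = 0.80

cycles = {
    "Cycle 1": (14, 16),
    "Cycle 2": (17, 18),
    "Cycle 3": (20, 22),
    "Cycle 4": (21, 21),
}

for name, (assessed, eligible) in cycles.items():
    rate = assessed / eligible
    status = "met" if rate >= GOAL else "not met"
    print(f"{name}: {assessed}/{eligible} = {rate:.1%} (goal {status})")
```

All four cycles clear the 80% threshold, matching the percentages quoted in the cycle descriptions (87.5%, 94.4%, 90.9%, and 100%).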

Discussion

The transition readiness assessment tool was developed to evaluate adolescent patients with SCD aged 17 years regarding their progress in the transition program and level of transition readiness. Most transition readiness measures available in the literature consider the patient and parent perspective but do not include the health care provider perspective or determine if the patient received the information necessary for successful transition. Our readiness assessment tool has been helpful in providing a structured and quantifiable means to identify at-risk patients and families prior to the transfer of care and revealing important gaps in transition planning. It also provides information in a timely manner about points of intervention to ensure patients receive adequate preparation and services (eg, psychological/neuropsychological testing). Additionally, monthly meetings are held during which the tool is scored and discussed, providing an opportunity for members of the transition team to examine patients’ progress toward transition readiness. Finally, completing an individualized tool in a multidisciplinary setting has the added benefit of encouraging increased staff collaboration and creating a venue for ongoing re-evaluation of the QI process.

We achieved our objective of completing the assessment tool for 80% of eligible patients throughout the cycles. The majority of our nonassessed patients were lost to follow-up and had not had a clinic visit in 2 to 3 years. Implementing the tool has provided us with an additional mechanism to verify transition eligibility and has afforded the transition program a systematic way to screen and track patients who are approaching the age of transition and who may not have been seen for an extended period of time. As with any large program following children with special health care and complex needs, the volume and complexity of patients may pose a challenge to the program; having an additional tracking system in place may help mitigate losses to follow-up. In fact, since the implementation of the tool, our team has been able to contact families and in some cases has reinstated services. As a by-product of tool implementation, we have instituted new policies to prevent extended losses to follow-up and patient attrition.

Limitations

A limitation of the assessment tool is that it does not incorporate the perspectives of the other stakeholders (adolescents, parents, adult providers). Further, some of the items in our tool measure utilization of services rather than transition readiness per se. As with most transition readiness measures, our provider tool does not have established reliability and validity [14]. We plan to test for reliability and validity once enough data and patient outcomes have been collected. Additionally, because of the small number of patients who have transferred to adult care since implementation of the tool, we did not examine the association between readiness scores and clinical outcomes, such as fulfillment of the first adult provider visit and hospital utilization following transition to adult care. As we continue to assess adolescent patients and track their progress following transition, we will be able to examine these associations with a larger group.

Future Plans

Since the implementation of the tool in our program, we have realized that we may need to start assessing patients at an earlier age, and perhaps multiple times throughout adolescence. Some of our patients have guardianship and conservatorship issues and require more time to discuss options with the family and to put the appropriate support and assistance in place prior to the transfer of care. Further, patients with poor adherence to clinic appointments do not receive all elements of the transition program curriculum and in turn have fewer opportunities to prepare for transition. To address some of our current limitations, we plan to incorporate a patient and parent readiness assessment and examine the associations between the provider assessment and patient information such as medical literacy quizzes, clinic attendance, and fulfillment of the first adult provider visit. Assessment from all 3 perspectives (patient, parent, and provider) will offer a 360-degree view of transition readiness, which should improve our ability to identify at-risk families and tailor transition planning to address barriers to care. In addition, we plan to develop a mechanism to inform patients and families about the domain scores and action plans following the transition readiness meetings and to include scores in the electronic medical record. Finally, the readiness assessment tool has revealed some gaps in our transition educational curriculum. Most of our transition learning involves providing information and evaluating its retention, but we are not systematically assessing actual acquired transition skills. We are in the process of developing and implementing skill-based learning for activities such as calling to make or reschedule an appointment with an adult provider, arranging transportation, and the like.

Conclusion

In conclusion, the provider transition readiness assessment has been a helpful tool for monitoring the progress of adolescents with SCD toward readiness for transition. The QI methodology and PDSA cycle approach have not only allowed for the testing, development, and implementation of the tool but are also allowing ongoing systematic refinement of our instrument. This approach has highlighted the psychosocial challenges our families face as they move toward the transfer of care, as well as the need for more individualized planning. The next important step is to evaluate the validity and reliability of the measure so we can better evaluate the impact of transition programming on the transfer from pediatric to adult care. We found the PDSA cycle approach to be a framework that can efficiently and systematically improve the quality of care of transitioning patients with SCD and their families.

 

Corresponding author: Jerlym Porter, PhD, MPH, St. Jude Children’s Research Hosp., 262 Danny Thomas Pl., Mail stop 740, Memphis, TN 38105, [email protected].

Funding/support: This work was supported in part by HRSA grant 6 U1EMC19331-03-02.

Financial disclosures: None.

References

1. Quinn CT, Rogers ZR, McCavit TL, Buchanan GR. Improved survival of children and adolescents with sickle cell disease. Blood 2010;115:3447–52.

2. Hassell KL. Population estimates of sickle cell disease in the U.S. Am J Prev Med 2010;38(4 Suppl):S512–S521.

3. Hamideh D, Alvarez O. Sickle cell disease related mortality in the United States (1999-2009). Pediatr Blood Cancer 2013;60:1482–6.

4. Lanzkron S, Carroll CP, Haywood C, Jr. Mortality rates and age at death from sickle cell disease: U.S., 1979-2005. Public Health Rep 2013;128:110–6.

5. Brousseau DC, Owens PL, Mosso AL, et al. Acute care utilization and rehospitalizations for sickle cell disease. JAMA 2010;303:1288–94.

6. Hemker BG, Brousseau DC, Yan K, et al. When children with sickle-cell disease become adults: lack of outpatient care leads to increased use of the emergency department. Am J Hematol 2011;86:863–5.

7. Jordan L, Swerdlow P, Coates TD. Systematic review of transition from adolescent to adult care in patients with sickle cell disease. J Pediatr Hematol Oncol 2013;35:165–9.

8. McPherson M, Thaniel L, Minniti CP. Transition of patients with sickle cell disease from pediatric to adult care: assessing patient readiness. Pediatr Blood Cancer 2009;52:838–41.

9. Lebensburger JD, Bemrich-Stolz CJ, Howard TH. Barriers in transition from pediatrics to adult medicine in sickle cell anemia. J Blood Med 2012;3:105–12.

10. Sawicki GS, Lukens-Bull K, Yin X, et al. Measuring the transition readiness of youth with special healthcare needs: validation of the TRAQ--Transition Readiness Assessment Questionnaire. J Pediatr Psychol 2011;36:160–71.

11. Ferris ME, Harward DH, Bickford K, et al. A clinical tool to measure the components of health-care transition from pediatric care to adult care: the UNC TR(x)ANSITION scale. Ren Fail 2012;34:744–53.

12. Gilleland J, Amaral S, Mee L, Blount R. Getting ready to leave: transition readiness in adolescent kidney transplant recipients. J Pediatr Psychol 2012;37:85–96.

13. Cappelli M, MacDonald NE, McGrath PJ. Assessment of readiness to transfer to adult care for adolescents with cystic fibrosis. Child Health Care 1989;18:218–24.

14. Stinson J, Kohut SA, Spiegel L, et al. A systematic review of transition readiness and transfer satisfaction measures for adolescents with chronic illness. Int J Adolesc Med Health 2013:1–16.

15. Telfair J, Myers J, Drezner S. Transfer as a component of the transition of adolescents with sickle cell disease to adult care: adolescent, adult, and parent perspectives. J Adolesc Health 1994;15:558–65.

16. Walley P, Gowland B. Completing the circle: from PD to PDSA. Int J Health Care Qual Assur Inc Leadersh Health Serv 2004;17:349–58.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6


Needs Assessment

Prior to initiation of the project, members of the transition program met monthly to informally discuss the progress of patients who were approaching the age of transition to adult care. We found that adolescents did not appear to be ready or well prepared for transition, including not being aware of the various familial and psychosocial issues that needed to be addressed prior to the transfer to adult care. We realized that these discussions needed to occur earlier to allow more time for preparation and transition planning by the patient, family, and medical team. In addition, members of the team each had differing perspectives and did not have the same information with regard to existing familial and psychosocial issues. The discussions were necessary to ensure all team members had pertinent information to make informed decisions about the patient’s level of transition readiness. Finally, our criteria for readiness were not standardized or quantifiable. As a result, each patient discussion was lengthy, unstructured, and not very informative. In 2011, a core group from the transition team attended a Health Resources and Services Administration–sponsored Hemoglobinopathies Quality Improvement Workshop to receive training in QI processes. We decided to create a formal, quantitative, individualized assessment of patients’ progress toward transition at age 17.

Readiness Assessment Tool

The assessment is divided into 4 domains based on the disciplines represented on the team: medical, psychosocial, emotional/cognitive, and academic (Table). Each discipline developed transition readiness items based on the transition curriculum content. The pediatric hematologist, midlevel provider (physician assistant), and nurse case managers developed the medical domain checklist to assess disease literacy, self-management, and organ dysfunction screening. The psychosocial domain checklist was developed by the social workers to assess patients’ understanding of information related to independent living and adult rights (eg, advance directives), emotional concerns related to transition, self-advocacy skills, and completion of a personal health record, a document designed to assist adolescents in learning about their medical history.

The emotional/cognitive domain checklist was developed by the pediatric psychologist and pediatric neuropsychologist. Because the psychology service is set up to see patients referred by the medical team and is unable to see all patients coming to hematology clinic, the emotional/cognitive checklist is based on identifying previous utilization of psychological services including psychotherapy and cognitive testing and determining whether initiation of services is warranted. The academic domain checklist was developed by the academic coordinator who serves as a liaison between the medical team and the school system. This checklist assesses whether the adolescent is meeting high school graduation requirements, able to verbalize an educational/job training plan, on track with future planning (eg, completed required testing), knowledgeable about community educational services, and able to self-advocate (eg, apply for SSI benefits).

Items within each domain have equal value (ie, each question on the checklist is worth 1 point) and the sum of points yields the quantifiable assessment of how well patients are performing in each area of their health. Assessment meetings occur monthly when eligible patients are discussed. Domains are evaluated by the health care provider responsible for his/her own domain (eg, social worker completes the psychosocial domain, the academic coordinator completes the academic domain, etc.).
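The equal-weight scoring described above can be sketched as follows. This is an illustrative example only, not the authors' actual tool; the checklist item names are hypothetical.

```python
# Sketch of the domain scoring scheme: each checklist item is worth
# 1 point, and the domain score is the number of items met.
# Item names below are hypothetical, for illustration only.

def score_domain(checklist: dict) -> int:
    """Sum of equally weighted checklist items (1 point each)."""
    return sum(1 for met in checklist.values() if met)

# Example: a provider marks 2 of 3 (hypothetical) medical-domain items as met.
medical_domain = {
    "names own medications": True,
    "describes own SCD genotype": False,
    "knows when to seek emergency care": True,
}

print(score_domain(medical_domain))
```

Keeping every item at one point makes domain scores directly comparable across patients, at the cost of treating all checklist items as equally important.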

PDSA Methodology

PDSA (Plan-Do-Study-Act) methodology was utilized to develop and evaluate the assessment tool. PDSA is a QI method that applies small-scale changes to a process, primarily within health care environments [16]. PDSA is executed in cycles; as changes are made, the process under study is improved. Changes are tested on a small scale and barriers are identified. Adjustments are made in subsequent cycles as needed.

For the QI project, 3 PDSA cycles were completed for the development and implementation of the assessment tool (Figure 1). We established a goal of completing an assessment for 80% of eligible patients (Figure 2). We used the clinical database to track this goal for each PDSA cycle. The period of data collection was August 2011 through May 2013. All adolescents receiving medical care in the SCD teen clinic aged 17 and 18 years were eligible for evaluation. From August 2011 to June 2013 we assessed 72 patients (53% male), median age 17.04 years. The following sickle cell genotypes were represented: 40 HbSS, 19 HbSC, 8 HbSβ+, 3 HbSβ0, and 2 HbS/HPFH. The data were collected for this report with institutional review board approval.
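The 80% completion goal can be tracked with a simple per-cycle calculation. A minimal sketch, using the assessed/eligible counts reported for Cycles 1 through 3 in this article (the function and variable names are our own):

```python
# Track the per-cycle assessment completion rate against the 80% goal.
# Counts are taken from the article (Cycles 1-3); names are illustrative.

GOAL = 0.80

def completion_rate(assessed: int, eligible: int) -> float:
    """Fraction of eligible patients who received an assessment."""
    return assessed / eligible

cycles = {
    "Cycle 1": (14, 16),  # 87.5%
    "Cycle 2": (17, 18),  # 94.4%
    "Cycle 3": (20, 22),  # 90.9%
}

for name, (assessed, eligible) in cycles.items():
    rate = completion_rate(assessed, eligible)
    status = "meets" if rate >= GOAL else "below"
    print(f"{name}: {rate:.1%} ({status} goal)")
```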

Cycle 1

The objective of the first cycle was to assess feasibility and acceptability of the assessment tool. Patients were assessed during the month of their 17th birthday. Fourteen out of 16 eligible patients (87.5%) were assessed: 1 patient was lost to follow-up, and 1 patient inadvertently was not included in the assessment due to an administrative error. Feedback from the first cycle revealed that some items on the emotional/cognitive domain checklist were not clearly defined, and there was some overlap with the psychosocial domain checklist. Additionally, some items were not readily assessed by psychology based on the structure of psychology services at the institution. Not all patients are seen by psychology; patients are referred to psychology by the team, and appointments occur in the psychology clinic and are not well integrated with the hematology clinic visit.

Cycle 2

The second cycle addressed some of the problems identified during Cycle 1. The emotional/cognitive domain checklist was revised to reflect psychology clinic utilization (psychotherapy and testing) and a section was added where team members could indicate individualized action plans. Seventeen patients out of 18 eligible patients were assessed (94.4%): 1 patient was lost to follow-up. At the conclusion of this cycle, we found that several patients had not completed certain transition program components, such as genetic education or their PHR. Therefore, we decided that we needed to indicate this and create a Plan of Action (POA) to ensure completion of program components. The POA indicated which components were outstanding, when these components would be completed, and when the team would discuss the patient again to track their progress with program components (eg, 6 months later).
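Generating a POA from a patient's outstanding program components could look like the sketch below. This is a hypothetical illustration: the component names are drawn from the program description earlier in the article, and the 6-month reassessment interval follows the article's example, but the function and field names are our own.

```python
# Hypothetical sketch: build a Plan of Action (POA) listing outstanding
# transition program components and a reassessment date (~6 months out).

from datetime import date, timedelta

# Program components named in the article's program description.
PROGRAM_COMPONENTS = [
    "personal health record (PHR)",
    "genetic education",
    "academic planning",
    "independent living skills",
]

def plan_of_action(completed: set, today: date) -> dict:
    """List outstanding components and schedule a follow-up discussion."""
    outstanding = [c for c in PROGRAM_COMPONENTS if c not in completed]
    return {
        "outstanding": outstanding,
        "reassess_on": today + timedelta(days=182),  # roughly 6 months later
    }

poa = plan_of_action({"academic planning"}, date(2013, 1, 15))
print(poa["outstanding"])
```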

Cycle 3

Following a few months using the assessment process, each member of the team provided feedback about their observations from the second cycle. The third cycle of the PDSA addressed some of the barriers identified in Cycle 2 by adding the POA and timeline for reassessment. With this information, the nurse case manager was able to identify and contact families who had significant gaps in the learning curriculum. Additionally, services such as psychological testing were scheduled in a timely manner to address academic problems and to provide rationale for accommodations and academic/vocational services before patients transferred care to the adult provider. With the number of assessed patients increasing, it was determined that a reliable tracking system to monitor progress was essential. Thus, a transition database was created to document the domain scores, individualized plan of action, and other components of the transition program, such as medical literacy quiz scores, completion of pre-transfer visits to adult providers, and completion of the PHR. During this cycle, 20 patients were assessed out of a total of 22 eligible patients (90.9%); 2 patients were lost to follow-up.
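A transition-database record of the kind described above might be structured as follows. The fields mirror those named in the article (domain scores, individualized plan of action, medical literacy quiz scores, pre-transfer visits, PHR completion), but the layout and names are an assumption, not the program's actual schema.

```python
# Sketch of one patient's record in the transition database; fields
# follow those named in the article, but the schema is hypothetical.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TransitionRecord:
    patient_id: str
    domain_scores: dict = field(default_factory=dict)   # e.g. {"medical": 5}
    plan_of_action: list = field(default_factory=list)  # outstanding components
    literacy_quiz_score: Optional[float] = None
    pre_transfer_visit_done: bool = False
    phr_completed: bool = False

record = TransitionRecord(
    patient_id="example-001",  # hypothetical identifier
    domain_scores={"medical": 5, "psychosocial": 4},
    plan_of_action=["genetic education"],
)
print(record.phr_completed)
```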

Cycle 4

This cycle is currently underway and comprises monthly assessments of eligible 17-year-old patients with SCD. From January 2013 to May 2013 we have assessed 100% of the eligible patients (21/21). All information obtained through the assessment tool is added to the transition database. Future adjustments and modifications are planned for this tool as we continue to evaluate its impact and value.

Discussion

The transition readiness assessment tool was developed to evaluate adolescent patients with SCD aged 17 years regarding their progress in the transition program and level of transition readiness. Most transition readiness measures available in the literature consider the patient and parent perspective but do not include the health care provider perspective or determine if the patient received the information necessary for successful transition. Our readiness assessment tool has been helpful in providing a structured and quantifiable means to identify at-risk patients and families prior to the transfer of care and revealing important gaps in transition planning. It also provides information in a timely manner about points of intervention to ensure patients receive adequate preparation and services (eg, psychological/neuropsychological testing). Additionally, monthly meetings are held during which the tool is scored and discussed, providing an opportunity for members of the transition team to examine patients’ progress toward transition readiness. Finally, completing an individualized tool in a multidisciplinary setting has the added benefit of encouraging increased staff collaboration and creating a venue for ongoing re-evaluation of the QI process.

We achieved our objective of completing the assessment tool for 80% of eligible patients throughout the cycles. The majority of our nonassessed patients were lost to follow-up and had not had a clinic visit in 2 to 3 years. Implementing the tool has provided us with an additional mechanism to verify transition eligibility and has afforded the transition program a systematic way to screen and track patients who are approaching the age of transition and who may not have been seen for an extended period of time. As with any large program following children with special health care and complex needs, the large volume of patients and their complexity may pose a challenge to the program; therefore, having an additional tracking system in place may help mitigate possible losses to follow-up. In fact, since the implementation of the tool, our team has been able to contact families and in some cases reinstate services. As a by-product of tool implementation, we have implemented new policies to prevent extended losses to follow-up and patient attrition.

Limitations

A limitation of the assessment tool is that it does not incorporate the perspectives of the other stakeholders (adolescents, parents, adult providers). Further, some of the items in our tool are measuring utilization of services and not specifically transition readiness. As with most transition readiness measures, our provider tool does not have established reliability and validity [14]. We plan to test for reliability and validity once enough data and patient outcomes have been collected. Additionally, because of the small number of patients who have transferred to adult care since implementation of the tool, we did not examine the association between readiness scores and clinical outcomes, such as fulfillment of first adult provider visit and hospital utilization following transition to adult care. As we continue to assess adolescent patients and track their progress following transition, we will be able to examine these associations with a larger group.

Future Plans

Since the implementation of the tool in our program, we have realized that we may need to start assessing patients at an earlier age and perhaps multiple times throughout adolescence. Some of our patients have guardianship and conservatorship issues and require more time to discuss options with the family and put in place the appropriate support and assistance prior to the transfer of care. Further, patients with low adherence to clinic appointments are not receiving all elements of the transition program curriculum and in turn have fewer opportunities to prepare for transition. To address some of our current limitations, we plan to incorporate a patient and parent readiness assessment and examine the associations between the provider assessment and patient information such as medical literacy quizzes, clinic adherence, and fulfillment of the first adult provider visit. Assessment from all 3 perspectives (patient, parent, and provider) will offer a 360-degree view of transition readiness perception, which should improve our ability to identify at-risk families and tailor transition planning to address barriers to care. In addition, our future plans include development of a mechanism to inform patients and families about the domain scores and action plans following the transition readiness meetings and to include scores in the electronic medical record. Finally, the readiness assessment tool has revealed some gaps in our transition educational curriculum. Most of our transition learning involves providing information and evaluating whether it was retained, but we are not systematically assessing actual acquired transition skills. We are in the process of developing and implementing skill-based learning for activities such as calling to make or reschedule an appointment with an adult provider, arranging transportation, etc.

Conclusion

In conclusion, the provider transition readiness assessment has been a helpful tool to monitor progress of adolescents with SCD towards readiness for transition. The QI methodology and PDSA cycle approach have not only allowed for testing, development, and implementation of the tool, but are also allowing ongoing systematic refinement of our instrument. This approach highlighted the psychosocial challenges of our families as they move toward the transfer of care, in addition to the need for more individualized planning. The next important step is to evaluate the validity and reliability of the measure so we can better evaluate the impact of transition programming on the transfer from pediatric to adult care. We found the PDSA cycle approach to be a framework that can efficiently and systematically improve the quality of care of transitioning patients with SCD and their families.

 

Corresponding author: Jerlym Porter, PhD, MPH, St. Jude Children’s Research Hosp., 262 Danny Thomas Pl., Mail stop 740, Memphis, TN 38105, [email protected].

Funding/support: This work was supported in part by HRSA grant 6 U1EMC19331-03-02.

Financial disclosures: None.

From the St. Jude Children’s Research Hospital, Memphis, TN.

This article is the fourth in our Hemoglobinopathy Learning Collaborative series. See the related editorial by Oyeku et al in the February 2014 issue of JCOM. (—Ed.) 

 

Abstract

  • Objective: To describe the use of quality improvement (QI) methodology to implement an assessment tool to evaluate transition readiness in youth with sickle cell disease (SCD).
  • Methods: Plan-Do-Study-Act (PDSA) cycles were run to evaluate the feasibility and effectiveness of a provider-based transition readiness assessment.
  • Results: Seventy-two adolescents aged 17 years (53% male) were assessed for transition readiness from August 2011 to June 2013. Results indicated that it is feasible for a provider transition readiness assessment (PTRA) tool to be integrated into a transition program. The newly created PTRA tool can inform the level of preparedness of adolescents with SCD during planning for adult transition.
  • Conclusion: The PTRA tool may be helpful for planning and preparation of youth with SCD to successfully transition to adult care.

 

Sickle cell disease (SCD) is one of the most common genetic disorders in the world and is caused by a mutation producing the abnormal sickle hemoglobin. Patients with SCD are living longer and transitioning from pediatric to adult providers. However, the transition years are associated with high mortality [1–4], risk for increased utilization of emergency care, and underutilization of care maintenance visits [5,6]. Successful transition from pediatric care to adult care is critical in ensuring care continuity and optimal health [7]. Barriers to successful transition include lack of preparation for transition [8,9]. To address this limitation, transition programs have been created to help foster transition preparation and readiness.

Often, chronological age determines when SCD programs transfer patients to adult care; however, age is an inadequate measure of readiness. To determine the appropriate time for transition and to individualize the subsequent preparation and planning prior to transfer, an assessment of transition readiness is needed. A number of checklists exist in the unpublished literature (eg, on institution and program websites), and a few empirically tested transition readiness measures have been developed through literature review, semi-structured interviews, and pilot testing in patient samples [10–13]. The Transition Readiness Assessment Questionnaire (TRAQ) and TRxANSITION scale are non-disease-specific measures that assess self-management and advocacy skills of youth with special health care needs; the TRAQ is self-report whereas the TRxANSITION scale is provider-administered [10,11]. Disease-specific measures have been developed for pediatric kidney transplant recipients [12] and adolescents with cystic fibrosis [13]. Studies using these measures suggest that transition readiness is associated with age, gender, disease type, increased adolescent responsibility/decreased parental involvement, and adherence [10–12].

For patients with SCD, there is no well-validated measure available to assess transition readiness [14]. Telfair and colleagues developed a sickle cell transfer questionnaire that focused on transition concerns and feelings and suggestions for transition intervention programming from the perspective of adolescents, their primary caregivers, and adults with SCD [15]. In addition, McPherson and colleagues examined SCD transition readiness in 4 areas: prior thought about transition, knowledge about steps to transition, interest in learning more about the transition process, and perceived importance of continuing care with a hematologist as an adult provider [8]. They found that adolescents in general were not prepared for transition but that readiness improved with age [8]. Overall, most readiness measures have involved patient self-report or parent proxy report. No current readiness assessment scales incorporate the provider’s assessment, which could help better define the most appropriate next steps in education and preparation for the upcoming transfer to adult care.

The St. Jude Children’s Research Hospital SCD Transition to Adult Care program was started in 2007 and is a companion program to the SCD teen clinic, serving 250 adolescents aged 12 to 18 years. The transition program curriculum addresses all aspects of the transition process. Based on the curriculum components, St. Jude developed and implemented a transition readiness assessment tool to be completed by providers in the SCD transition program. In this article, we describe our use of quality improvement (QI) methodology to evaluate the utility and impact of the newly created SCD transition readiness assessment tool.

Methods

Transition Program

The transition program is directed by a multidisciplinary team; disciplines represented on the team are medical (hematologist, genetic educator, physician assistant, and nurse coordinators), psychosocial (social workers), emotional/cognitive (psychologists), and academic (academic coordinator). In the program, adolescents with SCD and their families are introduced to the concept of transition to adult care at the age of 12. Every 6 months from 12 to 18 years of age, members of the team address relevant topics with patients to increase patients’ disease knowledge and improve their disease self-management skills. Some of the program components include training in completing a personal health record (PHR), genetic education, academic planning, and independent living skills.

Needs Assessment

Prior to initiation of the project, members of the transition program met monthly to informally discuss the progress of patients who were approaching the age of transition to adult care. We found that adolescents did not appear to be ready or well prepared for transition, including not being aware of the various familial and psychosocial issues that needed to be addressed prior to the transfer to adult care. We realized that these discussions needed to occur earlier to allow more time for preparation and transition planning of the patient, family, and medical team. In addition, members of the team each has differing perspectives and did not have the same information with regard to existing familial and psychosocial issues. The discussions were necessary to ensure all team members had pertinent information to make informed decisions about the patient’s level of transition readiness. Finally, our criteria for readiness were not standardized or quantifiable. As a result, each patient discussion was lengthy, not structured, and not very informative. In 2011, a core group from the transition team attended a Health Resources Services Administration–sponsored Hemoglobinopathies Quality Improvement Workshop to receive training in QI processes. We decided to create a formal, quantitative, individualized assessment of patients’ progress toward transition at age 17.

Readiness Assessment Tool

The assessment is divided into 4 domains based on the disciplines represented on the team: medical, psychosocial, emotional/cognitive, and academic (Table). Each discipline developed transition readiness items based on the transition curriculum content. The pediatric hematologist, midlevel provider (physician assistant), and nurse case managers developed the medical domain checklist to assess disease literacy, self-management, organ and dysfunction screening. The psychosocial domain checklist was developed by the social workers to assess patients’ understanding of information related to independent living and adult rights (eg, advance directives), emotional concerns related to transition, self-advocacy skills, and completion of a personal health record, a document designed to assist adolescents in learning about their medical history.

The emotional/cognitive domain checklist was developed by the pediatric psychologist and pediatric neuropsychologist. Because the psychology service is set up to see patients referred by the medical team and is unable to see all patients coming to hematology clinic, the emotional/cognitive checklist is based on identifying previous utilization of psychological services including psychotherapy and cognitive testing and determining whether initiation of services is warranted. The academic domain checklist was developed by the academic coordinator who serves as a liaison between the medical team and the school system. This checklist assesses whether the adolescent is meeting high school graduation requirements, able to verbalize an educational/job training plan, on track with future planning (eg, completed required testing), knowledgeable about community educational services, and able to self-advocate (eg, apply for SSI benefits).

Items within each domain have equal value (ie, each question on the checklist is worth 1 point) and the sum of points yields the quantifiable assessment of how well patients are performing in each area of their health. Assessment meetings occur monthly when eligible patients are discussed. Domains are evaluated by the health care provider responsible for his/her own domain (eg, social worker completes the psychosocial domain, the academic coordinator completes the academic domain, etc.).

PDSA Methodology

PDSA (Plan-Do-Study-Act) methodology was utilized to develop and evaluate the assessment tool. PDSA is a QI method that utilizes small-scale changes to a process, primarily within health care environments [16]. PDSA is executed in cycles and as changes are made, the process acted upon is improved. Changes are tested on a small scale and barriers are identified. Adjustments are made in subsequent cycles and as needed.

For the QI project, 3 PDSA cycles were completed for the development and implementation of the assessment tool (Figure 1). We established a goal of completing an assessment for 80% of eligible patients (Figure 2). We used the clinical database to track this goal for each PDSA cycle. The period of data collection was August 2011 through May 2013. All adolescents receiving medical care in the SCD teen clinic aged 17 and 18 years were eligible for evaluation. From August 2011 to June 2013 we assessed 72 patients (53% male), median age 17.04 years. The following sickle cell genotypes were represented: 40 HbSS, 19 HbSC, 8 HbSβ+, 3 HbSβ0, and 2 HbS/HPFH. The data were collected for this report with institutional review board approval.

Cycle 1

The objective of the first cycle was to assess feasibility and acceptability of the assessment tool. Patients were assessed during the month of their 17th birthday. Fourteen out of 16 eligible patients (87.5%) were assessed: 1 patient was lost to follow-up, and 1 patient inadvertently was not included in the assessment due to an administrative error. Feedback from the first cycle revealed that some items on the emotional/cognitive domain checklist were not clearly defined, and there was some overlap with the psychosocial domain checklist. Additionally, some items were not readily assessed by psychology based on the structure of psychology services at the institution. Not all patients are seen by psychology; patients are referred to psychology by the team and appointments occur in the psychology clinic and were not well-integrated within the hematology clinic visit.

Cycle 2

The second cycle addressed some of the problems identified during Cycle 1. The emotional/cognitive domain checklist was revised to reflect psychology clinic utilization (psychotherapy and testing) and a section was added where team members could indicate individualized action plans. Seventeen patients out of 18 eligible patients were assessed (94.4%): 1 patient was lost to follow-up. At the conclusion of this cycle, we found that several patients had not completed certain transition program components, such as genetic education or their PHR. Therefore, we decided that we needed to indicate this and create a Plan of Action (POA) to ensure completion of program components. The POA indicated which components were outstanding, when these components would be completed, and when the team would discuss the patient again to track their progress with program components (eg, 6 months later).

Cycle 3

Following a few months using the assessment process, each member of the team provided feedback about their observations from the second cycle. The third cycle of the PDSA addressed some of the barriers identified in Cycle 2 by adding the POA and timeline for reassessment. With this information, the nurse case manager was able to identify and contact families who had significant gaps in the learning curriculum. Additionally, services such as psychological testing were scheduled in a timely manner to address academic problems and to provide rationale for accommodations and academic/vocational services before patients transferred care to the adult provider. With the number of assessed patients increasing, it was determined that a reliable tracking system to monitor progress was essential. Thus, a transition database was created to document the domain scores, individualized plan of action, and other components of the transition program, such as medical literacy quiz scores, completion of pre-transfer visits to adult providers, and completion of the PHR. During this cycle, 20 patients were assessed out of a total of 22 eligible patients (90.9%); 2 patients were lost to follow-up.

Cycle 4

This cycle is currently underway and comprises monthly assessments of eligible 17-year-old patients with SCD. From January 2013 to May 2013 we have assessed 100% of the eligible patients (21/21). All information obtained through the assessment tool is added to the transition database. Future adjustments and modifications are planned for this tool as we continue to evaluate its impact and value.

Discussion

The transition readiness assessment tool was developed to evaluate adolescent patients with SCD aged 17 years regarding their progress in the transition program and level of transition readiness. Most transition readiness measures available in the literature consider the patient and parent perspective but do not include the health care provider perspective or determine if the patient received the information necessary for successful transition. Our readiness assessment tool has been helpful in providing a structured and quantifiable means to identify at-risk patients and families prior to the transfer of care and revealing important gaps in transition planning. It also provides information in a timely manner about points of intervention to ensure patients receive adequate preparation and services (eg, psychological/neuropsychological testing). Additionally, monthly meetings are held during which the tool is scored and discussed, providing an opportunity for members of the transition team to examine patients’ progress toward transition readiness. Finally, completing an individualized tool in a multidisciplinary setting has the added benefit of encouraging increased staff collaboration and creating a venue for ongoing re-evaluation of the QI process.

We achieved our objective of completing the assessment tool for 80% of eligible patients throughout the cycles. The majority of nonassessed patients were lost to follow-up and had not had a clinic visit in 2 to 3 years. Implementing the tool has given us an additional mechanism to verify transition eligibility and has afforded the transition program a systematic way to screen and track patients who are approaching the age of transition and who may not have been seen for an extended period. As with any large program following children with special health care and complex needs, the volume and complexity of patients may pose a challenge; having an additional tracking system in place may therefore help mitigate losses to follow-up. In fact, since implementation of the tool, our team has been able to contact families and, in some cases, reinstate services. As a by-product of tool implementation, we have established new policies to prevent extended losses to follow-up and patient attrition.

Limitations

A limitation of the assessment tool is that it does not incorporate the perspectives of other stakeholders (adolescents, parents, adult providers). Further, some items in our tool measure utilization of services rather than transition readiness specifically. As with most transition readiness measures, our provider tool does not have established reliability and validity [14]. We plan to test reliability and validity once enough data and patient outcomes have been collected. Additionally, because few patients have transferred to adult care since implementation of the tool, we did not examine the association between readiness scores and clinical outcomes, such as fulfillment of the first adult provider visit and hospital utilization following transition to adult care. As we continue to assess adolescent patients and track their progress following transition, we will be able to examine these associations in a larger group.

Future Plans

Since implementing the tool in our program, we have realized that we may need to begin assessing patients at an earlier age, and perhaps multiple times throughout adolescence. Some of our patients have guardianship and conservatorship issues and require more time to discuss options with the family and to put appropriate support and assistance in place prior to the transfer of care. Further, patients with low adherence to clinic appointments do not receive all elements of the transition program curriculum and in turn have fewer opportunities to prepare for transition. To address some of these limitations, we plan to incorporate a patient and parent readiness assessment and to examine the associations between the provider assessment and patient information such as medical literacy quizzes, clinic attendance, and fulfillment of the first adult provider visit. Assessment from all 3 perspectives (patient, parent, and provider) will offer a 360-degree view of perceived transition readiness, which should improve our ability to identify at-risk families and tailor transition planning to address barriers to care. In addition, we plan to develop a mechanism to inform patients and families about the domain scores and action plans following the transition readiness meetings and to incorporate scores into the electronic medical record. Finally, the readiness assessment tool has revealed gaps in our transition educational curriculum. Most of our transition learning involves providing information and evaluating whether it was retained, but we are not systematically assessing actual acquired transition skills. We are in the process of developing and implementing skill-based learning for activities such as calling to make or reschedule an appointment with an adult provider, arranging transportation, etc.

Conclusion

In conclusion, the provider transition readiness assessment has been a helpful tool for monitoring the progress of adolescents with SCD toward transition readiness. The QI methodology and PDSA cycle approach have not only allowed testing, development, and implementation of the tool, but are also allowing ongoing systematic refinement of our instrument. This approach highlighted the psychosocial challenges our families face as they move toward the transfer of care, as well as the need for more individualized planning. The next important step is to evaluate the validity and reliability of the measure so we can better evaluate the impact of transition programming on the transfer from pediatric to adult care. We found the PDSA cycle approach to be a framework that can efficiently and systematically improve the quality of care for transitioning patients with SCD and their families.

 

Corresponding author: Jerlym Porter, PhD, MPH, St. Jude Children’s Research Hosp., 262 Danny Thomas Pl., Mail stop 740, Memphis, TN 38105, [email protected].

Funding/support: This work was supported in part by HRSA grant 6 U1EMC19331-03-02.

Financial disclosures: None.

References

1. Quinn CT, Rogers ZR, McCavit TL, Buchanan GR. Improved survival of children and adolescents with sickle cell disease. Blood 2010;115:3447–52.

2. Hassell KL. Population estimates of sickle cell disease in the U.S. Am J Prev Med 2010;38(4 Suppl):S512–S521.

3. Hamideh D, Alvarez O. Sickle cell disease related mortality in the United States (1999-2009). Pediatr Blood Cancer 2013;60:1482–6.

4. Lanzkron S, Carroll CP, Haywood C, Jr. Mortality rates and age at death from sickle cell disease: U.S., 1979-2005. Public Health Rep 2013;128:110–6.

5. Brousseau DC, Owens PL, Mosso AL, et al. Acute care utilization and rehospitalizations for sickle cell disease. JAMA 2010;303:1288–94.

6. Hemker BG, Brousseau DC, Yan K, et al. When children with sickle-cell disease become adults: lack of outpatient care leads to increased use of the emergency department. Am J Hematol 2011;86:863–5.

7. Jordan L, Swerdlow P, Coates TD. Systematic review of transition from adolescent to adult care in patients with sickle cell disease. J Pediatr Hematol Oncol 2013;35:165–9.

8. McPherson M, Thaniel L, Minniti CP. Transition of patients with sickle cell disease from pediatric to adult care: assessing patient readiness. Pediatr Blood Cancer 2009;52:838–41.

9. Lebensburger JD, Bemrich-Stolz CJ, Howard TH. Barriers in transition from pediatrics to adult medicine in sickle cell anemia. J Blood Med 2012;3:105–12.

10. Sawicki GS, Lukens-Bull K, Yin X, et al. Measuring the transition readiness of youth with special healthcare needs: validation of the TRAQ--Transition Readiness Assessment Questionnaire. J Pediatr Psychol 2011;36:160–71.

11. Ferris ME, Harward DH, Bickford K, et al. A clinical tool to measure the components of health-care transition from pediatric care to adult care: the UNC TR(x)ANSITION scale. Ren Fail 2012;34:744–53.

12. Gilleland J, Amaral S, Mee L, Blount R. Getting ready to leave: transition readiness in adolescent kidney transplant recipients. J Pediatr Psychol 2012;37:85–96.

13. Cappelli M, MacDonald NE, McGrath PJ. Assessment of readiness to transfer to adult care for adolescents with cystic fibrosis. Child Health Care 1989;18:218–24.

14. Stinson J, Kohut SA, Spiegel L, et al. A systematic review of transition readiness and transfer satisfaction measures for adolescents with chronic illness. Int J Adolesc Med Health 2013:1–16.

15. Telfair J, Myers J, Drezner S. Transfer as a component of the transition of adolescents with sickle cell disease to adult care: adolescent, adult, and parent perspectives. J Adolesc Health 1994;15:558–65.

16. Walley P, Gowland B. Completing the circle: from PD to PDSA. Int J Health Care Qual Assur Inc Leadersh Health Serv 2004;17:349–58.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Transition Readiness Assessment for Sickle Cell Patients: A Quality Improvement Project

Long-Term Outcomes of Bariatric Surgery in Obese Adults


Study Overview

Objective. To identify the long-term outcomes of bariatric surgery in adults with severe obesity.

Design. Prospective longitudinal observational cohort study (the Longitudinal Assessment of Bariatric Surgery Consortium [LABS]). LABS was established to collect long-term data on safety and efficacy of bariatric surgeries.

Participants and setting. 2458 patients who underwent Roux-en-Y gastric bypass (RYGB) or laparoscopic adjustable gastric banding (LAGB) at 10 hospitals in 6 clinical centers in the United States. Participants were included if they had a body mass index (BMI) greater than 35 kg/m², were over the age of 18 years, and had not undergone prior bariatric surgery. Participants were recruited between 2006 and 2009, and follow-up continued until September 2012. Data collection occurred at baseline prior to surgery and then at 6 months, 12 months, and annually until 3 years following surgery.
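The inclusion criteria above reduce to a simple check (BMI is weight in kilograms divided by the square of height in meters). The following sketch is illustrative only; the function and parameter names are my own, not part of the LABS protocol:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def labs_eligible(weight_kg: float, height_m: float, age: int,
                  prior_bariatric_surgery: bool) -> bool:
    """Hypothetical encoding of the stated inclusion criteria:
    BMI > 35 kg/m^2, age over 18 years, no prior bariatric surgery."""
    return (bmi(weight_kg, height_m) > 35
            and age > 18
            and not prior_bariatric_surgery)

print(labs_eligible(120.0, 1.70, 45, False))  # True: BMI ≈ 41.5
print(labs_eligible(90.0, 1.80, 45, False))   # False: BMI ≈ 27.8
```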

Main outcomes measures. 3-year change in weight and resolution of diabetes, hypertension, and dyslipidemia.

Main results. Participants were between the ages of 18 and 78 years. The majority of participants were female (79%) and white (86%). Median BMI was 45.9 kg/m² (interquartile range [IQR], 41.7–51.5). At baseline, 774 (33%) had diabetes, 1252 (63%) had dyslipidemia, and 1601 (68%) had hypertension. Three years after surgery, the RYGB group exhibited greater weight loss than the LAGB group (median 41 kg vs. 20 kg). Participants experienced most of their total weight loss during the first year following surgery. As for the health parameters assessed, at 3 years 67.5% of RYGB patients and 28.6% of LAGB patients had at least partial diabetes remission, 61.9% of RYGB patients and 27.1% of LAGB patients had dyslipidemia remission, and 38.2% of RYGB patients and 17.4% of LAGB patients had hypertension remission.

Conclusion. Three years following bariatric surgery, participants with severe obesity exhibited significant weight loss. There was variability in the amount of weight loss and in the resolution of diabetes, hypertension, and dyslipidemia observed.

Commentary

Obesity in the United States increased threefold between 1950 and 2000 [1]. Currently, more than one-third of adult Americans are obese [2]. The relationship between obesity and risk for morbidity from type 2 diabetes, hypertension, stroke, sleep apnea, osteoarthritis, and several cancers is well documented [3]. Finkelstein et al [4] estimated that health care costs related to obesity and consequent morbidity were approximately $148 billion in 2008. The use of bariatric surgery to address obesity has grown in recent years. However, there is a dearth of knowledge regarding the long-term outcomes of these procedures.

In this study of RYGB and LAGB patients, 5 weight change patterns were identified in each group for a total of 10 trajectories. Although most weight loss was observed during the first year following surgery, 76% of RYGB patients had continued weight loss for 2 years with a small weight increase the subsequent year. Only 4% of LAGB patients experienced consistent weight loss after 3 years. Overall, participants who underwent LAGB had greater variability in outcomes than RYGB patients. RYGB patients experienced greater remission of all chronic conditions examined and fewer new diagnoses of hypertension and dyslipidemia. The RYGB group experienced 3 deaths occurring within 30 days post-surgery while the LAGB group had none.

This study has several strengths, including its longitudinal design and the generalizability of study findings. Several factors contribute to the generalizability, including the large sample size (n = 2458), which includes participants from 10 hospitals in 6 clinical centers and was more diverse than prior longitudinal studies of patients following bariatric surgery. In addition, the study had clear inclusion criteria, and attrition rates were low; data were collected for 79% and 85% of the RYGB and LAGB patients, respectively. Additionally, study personnel were trained on data collection, which occurred at several time-points.

There are also a few limitations, including the use of several methods for collecting data on physical and physiologic indicators. Most weights were collected using a standardized scale; however, weights recorded on other scales and self-reported weights were accepted if an in-person weight was not obtained. Similarly, different measures were used to identify chronic conditions. Diabetes was identified by any of 3 measures: taking a diabetes medication, glycated hemoglobin of 6.5% or greater, or fasting plasma glucose of 126 mg/dL or greater. Hypertension was defined as taking an antihypertensive medication, an elevated systolic blood pressure (≥ 140 mm Hg), or an elevated diastolic blood pressure (≥ 90 mm Hg). Likewise, high low-density lipoprotein (≥ 160 mg/dL) or taking a lipid-lowering medication was used as an indicator of hyperlipidemia. Therefore, chronic conditions were not identified or measured in a uniform manner. Accordingly, the authors observed high variability in remission rates among participants in the LAGB group, which may be directly attributable to these inconsistencies in the identification of disease status. Although the sample is described as diverse compared with similar studies, it primarily consisted of white females.

A significant finding was that non-white and younger participants had more missing data, as they were less likely to return for follow-up visits. Additionally, large discrepancies in weight loss were noted. The authors assert that both findings suggest more education and support are needed to achieve lasting adherence in some subgroups of patients undergoing bariatric surgery. Further evaluation of the factors contributing to these differences in weight loss is also needed.

Applications for Clinical Practice

This study is relevant to practitioners caring for patients with multiple chronic conditions related to severe obesity. The results indicate that bariatric surgery is associated with significant improvements in weight and remission of several chronic conditions. Practitioners can inform patients about the safety and efficacy of bariatric surgery procedures and discuss the evidence supporting its long-term efficacy as an intervention. As obesity rates continue to increase, it is important to understand the long-term benefits and risks of bariatric surgery.

—Billy A. Caceres, MSN, RN, and Allison Squires, PhD, RN

References

1. Picot J, Jones J, Colquitt JL, et al. The clinical effectiveness and cost-effectiveness of bariatric (weight loss) surgery for obesity: a systematic review and economic evaluation. Health Technol Assess 2009;13:1–190, 215–357.

2. Ogden CL, Carroll MD, Kit BK, et al. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA 2014;311:806–14.

3. National Institutes of Health. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults 1998. Available at www.nhlbi.nih.gov/guidelines/obesity/ob_gdlns.pdf.

4. Finkelstein EA, Trogdon JG, Cohen JW, et al. Annual medical spending attributable to obesity: Payer-and service-specific estimates. Health Aff 2009;28:822–31.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Publications
Topics
Sections

Study Overview

Objective. To identify the long-term outcomes of bariatric surgery in adults with severe obesity.

Design. Prospective longitudinal observational cohort study (the Longitudinal Assessment of Bariatric Surgery Consortium [LABS]). LABS was established to collect long-term data on safety and efficacy of bariatric surgeries.

Participants and setting. 2458 patients who underwent Roux-en-Y gastric bypass (RYGB) or laparoscopic adjustable gastric banding (LAGB) at 10 hospitals in 6 clinical centers in the United States. Participants were included if they had a body mass index (BMI) greater than 35 kg/m, were over the age of 18 years, and had not undergone prior bariatric surgeries. Participants were recruited between 2006 and 2009, and follow-up continued until September 2012. Data collection occurred at baseline prior to surgery and then at 6 months, 12 months, and annually until 3 years following surgery.

Main outcomes measures. 3-year change in weight and resolution of diabetes, hypertension, and dyslipidemia.

Main results. Participants were between the ages of 18 and 78 years. The majority of participants were female (79%) and white (86%). Median BMI was 45.9 (interquartile range [IQR], 41.7–51.5). At baseline, 774 (33%) had diabetes, 1252 (63%) had dyslipidemia, and 1601 (68%) had hypertension. Three years after surgery, the LAGB group exhibited greater weight loss (median 41 kg vs. 20 kg). Participants experienced most of their total weight loss during the first year following surgery. As for the health parameters assessed, at 3 years 67.5% of RYGB patients and 28.6% of LAGB patients had at least partial diabetes remission, 61.9% of RYGB patients and 27.1% of LAGB patients had dyslipidemia remission, and 38.2% of RYGB patients and 17.4 % of LAGB patients had hypertension remission.

Conclusion. Three years following bariatric surgery, participants with severe obesity exhibited significant weight loss. There was variability in the amount of weight loss and in resolution of diabetes, hypertension and dyslipidemia observed.

Commentary

Obesity in the United States increased threefold between 1950 and 2000 [1]. Currently, more than one-third of adult Americans are obese [2]. The relationship between obesity and risk for morbidity from type 2 diabetes, hypertension, stroke, sleep apnea, osteoarthritis, and several cancers is well documented [3]. Finkelstein et al [4] estimated that health care costs related to obesity and consequent morbidity were approximately $148 billion in 2008. The use of bariatric surgery to address obesity has grown in recent years. However, there is a dearth of knowledge regarding the long-term outcomes of these procedures.

In this study of RYGB and LAGB patients, 5 weight change patterns were identified in each group for a total of 10 trajectories. Although most weight loss was observed during the first year following surgery, 76% of RYGB patients had continued weight loss for 2 years with a small weight increase the subsequent year. Only 4% of LAGB patients experienced consistent weight loss after 3 years. Overall, participants who underwent LAGB had greater variability in outcomes than RYGB patients. RYGB patients experienced greater remission of all chronic conditions examined and fewer new diagnoses of hypertension and dyslipidemia. The RYGB group experienced 3 deaths occurring within 30 days post-surgery while the LAGB group had none.

This study has several strengths, including its longitudinal design and the generalizability of study findings. Several factors contribute to the generalizability, including the large sample size (n = 2458), which includes participants from 10 hospitals in 6 clinical centers and was more diverse than prior longitudinal studies of patients following bariatric surgery. In addition, the study had clear inclusion criteria, and attrition rates were low; data were collected for 79% and 85% of the RYGB and LAGB patients, respectively. Additionally, study personnel were trained on data collection, which occurred at several time-points.

There are also a few limitations, including that researchers used several methods for collecting data on associated physical and physiologic indicators. Most weights were collected using a standardized scale; however, weights recorded on other scales and self-reported weights were collected if an in-person weight was not obtained. Similarly, different measures were used to identify chronic conditions. Diabetes was identified by 3 different measures: taking a diabetes medication, glycated hemoglobin of 6.5% or greater, and fasting plasma glucose of 126 mg/dL or greater. Hypertension was defined as either taking an antihypertensive medication, elevated systolic (≥ 140 mm Hg) or elevated diastolic blood pressure (≥ 90 mm Hg). Likewise, high low-density lipoprotein (≥ 160 mg/dL ) and taking a lipid-lowering medication were used as indicators of hyperlipidemia. Therefore, chronic conditions were not identified or measured in a uniform manner. Accordingly, the authors observed high variability in remission rates among participants in the LAGB group, which may be directly attributed to the inconsistencies in identification of disease status. Although the sample is identified as diverse compared with similar studies, it primarily consisted of white females.

A significant finding was that non-white and younger participants had more missing data, as they were less likely to return for follow-up visits. Additionally, large discrepancies in weight loss were noted. Authors assert that both these findings suggest more education and support are needed for lasting adherence in some subgroups of patients undergoing bariatric surgery. Further evaluation of which factors contribute to these differences in weight loss is also needed.

Applications for Clinical Practice

This study is relevant to practitioners caring for patients with multiple chronic conditions related to severe obesity. The results indicate that bariatric surgery is associated with significant improvements in weight and remission of several chronic conditions. Practitioners can inform patients about the safety and efficacy of bariatric surgery procedures and discuss the evidence supporting its long-term efficacy as an intervention. As obesity rates continue to increase, it is important to understand the long-term benefits and risks of bariatric surgery.

—Billy A. Caceres, MSN, RN, and Allison Squires, PhD, RN

Study Overview

Objective. To identify the long-term outcomes of bariatric surgery in adults with severe obesity.

Design. Prospective longitudinal observational cohort study (the Longitudinal Assessment of Bariatric Surgery Consortium [LABS]). LABS was established to collect long-term data on safety and efficacy of bariatric surgeries.

Participants and setting. 2458 patients who underwent Roux-en-Y gastric bypass (RYGB) or laparoscopic adjustable gastric banding (LAGB) at 10 hospitals in 6 clinical centers in the United States. Participants were included if they had a body mass index (BMI) greater than 35 kg/m, were over the age of 18 years, and had not undergone prior bariatric surgeries. Participants were recruited between 2006 and 2009, and follow-up continued until September 2012. Data collection occurred at baseline prior to surgery and then at 6 months, 12 months, and annually until 3 years following surgery.

Main outcomes measures. 3-year change in weight and resolution of diabetes, hypertension, and dyslipidemia.

Main results. Participants were between the ages of 18 and 78 years. The majority of participants were female (79%) and white (86%). Median BMI was 45.9 (interquartile range [IQR], 41.7–51.5). At baseline, 774 (33%) had diabetes, 1252 (63%) had dyslipidemia, and 1601 (68%) had hypertension. Three years after surgery, the LAGB group exhibited greater weight loss (median 41 kg vs. 20 kg). Participants experienced most of their total weight loss during the first year following surgery. As for the health parameters assessed, at 3 years 67.5% of RYGB patients and 28.6% of LAGB patients had at least partial diabetes remission, 61.9% of RYGB patients and 27.1% of LAGB patients had dyslipidemia remission, and 38.2% of RYGB patients and 17.4 % of LAGB patients had hypertension remission.

Conclusion. Three years following bariatric surgery, participants with severe obesity exhibited significant weight loss. There was variability in the amount of weight loss and in resolution of diabetes, hypertension and dyslipidemia observed.

Commentary

Obesity in the United States increased threefold between 1950 and 2000 [1]. Currently, more than one-third of adult Americans are obese [2]. The relationship between obesity and risk for morbidity from type 2 diabetes, hypertension, stroke, sleep apnea, osteoarthritis, and several cancers is well documented [3]. Finkelstein et al [4] estimated that health care costs related to obesity and consequent morbidity were approximately $148 billion in 2008. The use of bariatric surgery to address obesity has grown in recent years. However, there is a dearth of knowledge regarding the long-term outcomes of these procedures.

In this study of RYGB and LAGB patients, 5 weight change patterns were identified in each group for a total of 10 trajectories. Although most weight loss was observed during the first year following surgery, 76% of RYGB patients had continued weight loss for 2 years with a small weight increase the subsequent year. Only 4% of LAGB patients experienced consistent weight loss after 3 years. Overall, participants who underwent LAGB had greater variability in outcomes than RYGB patients. RYGB patients experienced greater remission of all chronic conditions examined and fewer new diagnoses of hypertension and dyslipidemia. The RYGB group experienced 3 deaths occurring within 30 days post-surgery while the LAGB group had none.

This study has several strengths, including its longitudinal design and the generalizability of study findings. Several factors contribute to the generalizability, including the large sample size (n = 2458), which includes participants from 10 hospitals in 6 clinical centers and was more diverse than prior longitudinal studies of patients following bariatric surgery. In addition, the study had clear inclusion criteria, and attrition rates were low; data were collected for 79% and 85% of the RYGB and LAGB patients, respectively. Additionally, study personnel were trained on data collection, which occurred at several time-points.

There are also a few limitations, including that researchers used several methods for collecting data on associated physical and physiologic indicators. Most weights were collected using a standardized scale; however, weights recorded on other scales and self-reported weights were collected if an in-person weight was not obtained. Similarly, different measures were used to identify chronic conditions. Diabetes was identified by 3 different measures: taking a diabetes medication, glycated hemoglobin of 6.5% or greater, and fasting plasma glucose of 126 mg/dL or greater. Hypertension was defined as either taking an antihypertensive medication, elevated systolic (≥ 140 mm Hg) or elevated diastolic blood pressure (≥ 90 mm Hg). Likewise, high low-density lipoprotein (≥ 160 mg/dL ) and taking a lipid-lowering medication were used as indicators of hyperlipidemia. Therefore, chronic conditions were not identified or measured in a uniform manner. Accordingly, the authors observed high variability in remission rates among participants in the LAGB group, which may be directly attributed to the inconsistencies in identification of disease status. Although the sample is identified as diverse compared with similar studies, it primarily consisted of white females.

A significant finding was that non-white and younger participants had more missing data, as they were less likely to return for follow-up visits. Additionally, large discrepancies in weight loss were noted. Authors assert that both these findings suggest more education and support are needed for lasting adherence in some subgroups of patients undergoing bariatric surgery. Further evaluation of which factors contribute to these differences in weight loss is also needed.

Applications for Clinical Practice

This study is relevant to practitioners caring for patients with multiple chronic conditions related to severe obesity. The results indicate that bariatric surgery is associated with significant improvements in weight and remission of several chronic conditions. Practitioners can inform patients about the safety and efficacy of bariatric surgery procedures and discuss the evidence supporting its long-term efficacy as an intervention. As obesity rates continue to increase, it is important to understand the long-term benefits and risks of bariatric surgery.

—Billy A. Caceres, MSN, RN, and Allison Squires, PhD, RN



Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6
Display Headline
Long-Term Outcomes of Bariatric Surgery in Obese Adults

Light Intensity Physical Activity May Reduce Risk of Disability Among Adults with or At Risk For Knee Osteoarthritis


Study Overview

Objective. To determine if time spent in light intensity physical activity is related to incident disability and disability progression.

Design. Prospective cohort study.

Setting and participants. This study used a subcohort from the Osteoarthritis Initiative, a longitudinal study that enrolled 4796 men and women aged 45 to 79 years with, or at high risk of developing, knee osteoarthritis. Inclusion criteria for the main cohort study were: (1) presence of osteoarthritis with symptoms in at least 1 knee (with a definite tibiofemoral osteophyte) and pain, aching, or stiffness on most days for at least 1 month during the previous 12 months; or (2) presence of at least 1 of a set of established risk factors for knee osteoarthritis: knee symptoms in the previous 12 months; overweight; knee injury causing difficulty walking for at least a week; history of knee surgery; family history of a total knee replacement for osteoarthritis; Heberden’s nodes; repetitive knee bending at work or outside work; and age 70–79 years. The subcohort of the current study was drawn from the 2127 participants who enrolled in the substudy with accelerometer monitoring and included those without disability at study onset; exclusion criteria included insufficient baseline accelerometer monitoring, incomplete outcome or covariate data, death, and loss to follow-up. A total of 1680 participants were included in the main analysis, and an additional 134 participants with baseline mild or moderate disability (for a total of 1814) were included in a secondary analysis. Data were collected between September 2008 and December 2012 at 4 sites (Baltimore; Pittsburgh; Columbus, Ohio; and Pawtucket, Rhode Island).

Main outcome measure. Disability at the 2-year follow-up visit among those without disability at baseline. Disability was ascertained using a set of questions asking whether participants had any difficulty performing each basic or instrumental activity of daily living because of a health or memory problem. Basic activities included walking across a room, dressing, bathing, eating, using the toilet, and bed transfer. Instrumental activities of daily living included preparing hot meals, grocery shopping, making telephone calls, taking drugs, and managing money. Disability levels were defined as none, mild (only instrumental activities limitations), moderate (1–2 basic activities limitations), and severe (more than 2 basic activities limitations).

Statistical analysis. The main predictor variable was physical activity measured at baseline using accelerometers. Participants wore the accelerometer for 7 consecutive days on a belt from arising in the morning until retiring, except during water activities. Participants also recorded on a daily log the time spent in water and cycling. Intensity thresholds were applied on a minute-by-minute basis to identify non-sedentary activity of light intensity and of moderate to vigorous intensity. The primary variable was the accelerometer assessment of physical activity measured as daily minutes spent in light or moderate-vigorous activity. The time spent was divided into quartiles; the quartile cut-points for light activity were 229, 277, and 331 minutes, and the cut-points for moderate-vigorous activity were 4.3, 12.2, and 28.2 average minutes per day. Other covariates were socioeconomic factors, including race and ethnicity, age, sex, education, and income; health factors, including chronic conditions by self-report, body mass index, knee-specific health factors and symptoms, smoking, and gait speed. The main analysis of the relationship between baseline physical activity and the development of disability was done using survival analysis techniques and hazard ratios. A secondary analysis using the larger cohort evaluated hazard ratios for disability progression, defined as progression to a more severe level, among the 1814 participants.

Main results. In the main analysis of 1680 participants without disability at baseline, 149 participants developed new disability over the 2 years of follow-up. Average age of the cohort was 65 years, the majority (85%) were white, and approximately 54% were female. The cohort averaged 302 minutes a day of non-sedentary activity, the majority of which was light-intensity activity (284 minutes). Older age was associated with lower physical activity (P < 0.001), as were male sex (P < 0.001), higher body mass index, a number of chronic medical conditions (cancer, cerebrovascular disease, congestive heart failure), lower extremity pain, and higher grade of knee osteoarthritis severity. Onset of disability was associated with daily light-intensity activity time, even after adjusting for covariates. Using the group with the lowest quartile of light-intensity activity time as reference, groups with higher quartiles of activity had lower hazard ratios for onset of disability: 0.64, 0.51, and 0.67 for the second, third, and highest quartile, respectively. Using quartiles defined by daily moderate to vigorous activity time, longer duration of moderate-vigorous activity was likewise associated with delayed onset of disability. The secondary analysis using the cohort with and without disability at baseline (n = 1814) found similar results: more time spent in light-intensity activity was associated with less incident disability.

Conclusion. Greater daily time spent in light intensity physical activity was associated with lower risk of onset and progression of disability among adults with knee osteoarthritis and those with risk factors for knee osteoarthritis.

Commentary

Disability, such as the inability to dress, bathe, or manage one’s medications, is prevalent among older adults in the United States [1,2]. The development of such disability among older adults is often complex and multifactorial. One significant contributor is osteoarthritis of the knee [3]. Although prior observational and randomized controlled trials have established that moderate to vigorous physical activity reduces disability incidence and progression [4,5], less is known about light intensity physical activity—activities that may be more realistically introduced for adults with symptomatic knee arthritis.

The current prospective cohort study included adults with and at risk for knee osteoarthritis; the authors found that physical activity, even of light intensity, is associated with lower risk of disability onset and progression. A major strength of the study is the objective measurement of physical activity using accelerometers rather than reliance on recall or diaries, which are more subject to bias. Another strength is the long follow-up period, which allowed for the examination of incident disability or disability progression over 2 years. The results confirm that even light-intensity activity is associated with reduced risk of incident disability.

It is important to note that causation cannot be inferred in this study. As the authors stated, those who can do longer periods of physical activity may be at lower risk of developing incident disability because of factors other than the physical activity itself. A different study design, such as a randomized trial, is needed to demonstrate that light intensity physical activity, when introduced to adults with or at risk for knee arthritis, may lead to reduced risk of disability.

Applications for Clinical Practice

Prior studies suggest that introducing regular exercise has significant health benefits. The recommendation for exercise for adults with knee arthritis remains the same. Whether introducing light-intensity activity, particularly for those who are unable to perform more vigorous exercise, yields similar benefits will require further studies designed to determine therapeutic effect.

—William Hung, MD, MPH

References

1. Manton KG, Gu XL, Lamb VL. Change in chronic disability from 1982 to 2004/2005 as measured by long-term changes in function and health in the U.S. elderly population. PNAS 2006;103:18374–9.

2. Hung WW, Ross JS, Boockvar KS, Siu AL. Recent trends in chronic disease, impairment and disability among older adults in the United States. BMC Geriatrics 2011;11:47.

3. Ettinger WH, Davis MA, Neuhaus JM, Mallon KP. Long-term physical functioning in persons with knee osteoarthritis from NHANES I: Effects of comorbid medical conditions. J Clin Epidemiol 1994;47:809–15.

4. Penninx BW, Messier SP, Rejesko WJ, et al. Physical exercise and the prevention of disability in activities of daily living in older persons with osteoarthritis. Arch Intern Med 2001;161:2309–16.

5. Ettinger WH, Burns R, Messier SP, et al. A randomized trial comparing aerobic exercise and resistance exercise with a health education program in older adults with knee osteoarthritis. The Fitness Arthritis and Seniors Trial (FAST). JAMA 1997;277:25–31.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6


Capturing the Impact of Language Barriers on Asthma Management During an Emergency Department Visit


Study Overview

Objective. To compare rates of asthma action plan use in limited English proficiency (LEP) caregivers compared with English proficient (EP) caregivers.

Design. Cross-sectional survey.

Participants and setting. A convenience sample of 107 Latino caregivers of children with asthma at an urban academic emergency department (ED). Surveys in the preferred language of the patient (English or Spanish, with the translated version previously validated) were distributed at the time of the ED visit. Interpreters were utilized when requested.

Main outcome measure. Caregiver use of an asthma action plan.

Main results. 51 LEP caregivers and 56 EP caregivers completed the survey. Mothers completed the surveys 87% of the time, and the average age of the patients was 4 years. Among the EP caregivers, 64% reported using an asthma action plan, while only 39% of the LEP caregivers reported using one. The difference was statistically significant (P = 0.01). In both correlation and regression analyses, English proficiency was the only variable (others included health insurance status and level of caregiver education) that showed a significant effect on asthma action plan use.

Conclusions. Children whose caregiver had LEP were significantly less likely to have and use an asthma action plan. Asthma education in the language of choice of the patient may help improve asthma care.

Commentary

With 20% of US households now speaking a language other than English at home [1], language barriers between providers and patients present multiple challenges to health services delivery and can significantly contribute to immigrant health disparities. Despite US laws and multiple federal agency policies requiring the use of interpreters during health care encounters, organizations continue to fall short of providing interpreter services and often lack adequate or equivalent materials for patient education. Too often, providers overestimate their language skills [2,3], use colleagues as ad hoc interpreters out of convenience [4], or rely on family members for interpretation [4]—a practice that is universally discouraged.

Recent research does suggest that the timing of interpreter use is critical. In planned encounters such as primary care visits, interpreters can and should be scheduled for visits when a language-concordant provider is not available. During hospitalizations, including ED visits, interpreters are most effective when used on admission, during patient teaching, and upon discharge, and the timing of interpreter use has been shown to affect length of stay and readmission rates [5,6].

This study magnifies the consequences of failing to provide language-concordant services to patients and their caregivers. It also helps to identify one of the sources of pediatric asthma health disparities in Latino populations. The emphasis on the role of the caregiver in action plan utilization is a unique aspect of this study and it is one of the first to examine the issue in this way. It highlights the importance of caregivers in health system transitions and illustrates how a language barrier can potentially impact transitions.

The authors’ explicit use of a power analysis to calculate their sample size is a strength of the study. Furthermore, the authors differentiated their respondents by country of origin, something that rarely occurs in studies of Latinos [7] and that allows the reader to assess effects at a micro level within this population. The presentation of Spanish-language quotes with their translations within the manuscript provides transparency, allowing bilingual readers to verify the accuracy of the authors’ translation.

There are, however, a number of methodological issues that should be noted. The authors acknowledge that they did not account for asthma severity in the survey or control for it in the analysis, did not assess health literacy, and did not differentiate their results by country of origin. The latter point is important because the immigration experience and demographic profiles of Latinos differ significantly by country of origin and could factor into action plan use. The description of the survey translation process also did not explain how it accounted for the well-established linguistic variation that occurs in the Spanish language. Additionally, US census data show that the main countries of origin of Latinos in the study’s service area are Puerto Rico, Ecuador, and Mexico [1], yet the survey itself had Ecuador as a write-in and Dominican as a response option, a combination that reflects the Latino demographic composition of the nearest large urban area. Thus, when collecting country of origin data on immigrant patients, country choices should reflect local demographics rather than national trends for maximum precision.

Another concern is that Spanish language literacy was not assessed. Many Latino immigrants have limited reading ability in Spanish. For Mexican immigrants in particular, Spanish may be a second language after their indigenous language, as it is for some South American immigrants from the Andean region. Many Latino immigrants come to the United States with less than an 8th-grade education and likely come from educational systems of poor quality, which affects their Spanish reading and writing skills [8]. Assessing education level based on US equivalents is not an accurate way to gauge literacy. Thus, assessing Spanish reading literacy before surveying patients would have been a useful step that could have further refined the results. These factors have implications for action plan utilization and implementation in any chronic disease.

Providers often think that language barriers are an obvious factor in health disparities and service delivery, but few studies have actually captured or quantified the effects of language barriers on health outcomes. Most studies only identify language barriers as an access issue. This study provides a good illustration of the impact of a language barrier on a known and effective intervention for pediatric asthma management. Practitioners can take the consequences illustrated in this study and easily extrapolate the contribution to health disparities on a broader scale.

Applications for Clinical Practice

Practitioners caring for ED patients who have a language barrier, or whose caregivers do, should make every effort to use appropriate interpreter services when patient teaching occurs. Assessing not only health literacy but also reading ability in the LEP patient or caregiver is important, since both will affect the dyad’s ability to implement the self-care measures recommended in patient teaching sessions or an action plan. Asking patients their country of origin, regardless of legal status, will help practitioners refine patient teaching and the language they (and the interpreter, when appropriate) use to explain what needs to be done to manage the condition.

—Allison Squires, PhD, RN

References

1. Ryan C. Language use in the United States: 2011. Migration Policy Institute: Washington, DC; 2013.

2. Diamond LC, Luft HS, Chung S, Jacobs EA. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health Serv Res 2012;47(1 Pt 2):556–69.

3. Jacobs EA. Patient centeredness in medical encounters requiring an interpreter. Am J Med 2000;109:515.

4. Hsieh E. Understanding medical interpreters: reconceptualizing bilingual health communication. Health Commun 2006;20:177–86.

5. Karliner LS, Kim SE, Meltzer DO, Auerbach AD. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med 2010;5:276–82.

6. Lindholm M, Hargraves JL, Ferguson WJ, Reed G. Professional language interpretation and inpatient length of stay and readmission rates. J Gen Intern Med 2012;27:1294–9.

7. Gerchow L, Tagliaferro B, Squires A, et al. Latina food patterns in the United States: a qualitative metasynthesis. Nurs Res 2014;63:182–93.

8. Sudore RL, Landefeld CS, Pérez-Stable EJ, et al. Unraveling the relationship between literacy, language proficiency, and patient-physician communication. Patient Educ Couns 2009;75:398–402.

Issue
Journal of Clinical Outcomes Management - June 2014, VOL. 21, NO. 6

Display Headline
Capturing the Impact of Language Barriers on Asthma Management During an Emergency Department Visit

Blood sterilization processes harmful to platelets

Article Type
Changed
Sun, 06/15/2014 - 05:00

Platelets in a blood smear

Some processes used to sterilize blood for transfusion are harmful to platelet function and could cause serious health issues in transfusion recipients, researchers say.

They found that some pathogen-reduction treatments affect platelets to the extent that the treatments may be the cause of hemorrhages in recipients.

The pathogen reduction treatments “were developed more than 20 years ago, before we understood the importance of the genetic material contained in platelets,” explained study author Patrick Provost, PhD, of Université Laval and the CHU de Québec Research Center in Canada. 

Platelets contain up to a third of the human genome in the form of ribonucleic acid (RNA), which enables them to synthesize over 1,000 proteins essential to the normal functioning of the human body.

The researchers studied the effects of 3 pathogen-reduction strategies—irradiation, riboflavin plus UVB light (Mirasol), and amotosalen plus UVA light (Intercept)—on platelet microRNAs, messenger RNAs (mRNAs), activation, and function. 

They reported their findings in the journal Platelets.

The investigators collected 50 single-donor (apheresis) platelet concentrates (PCs) and subjected them to 5 treatments.

The control platelets were stored in donor plasma; the additive solution platelets were stored in 65% storage solution and 35% donor plasma; the irradiated platelets were treated with 30 Gy gamma irradiation and stored in donor plasma; the Mirasol-treated platelets were stored in donor plasma; and the Intercept-treated platelets were stored in the same solution as the additive solution group.

All treatments followed standard procedures or the manufacturer’s instructions.

After platelet isolation and RNA extraction, the investigators analyzed the platelets’ microRNA and mRNA levels and assessed the impact of the treatments on platelet activation and function.

MicroRNA profiles

They learned that platelets stored with additive solution or irradiation had significantly (P<0.05) reduced levels of one microRNA each, and only on day 7 of storage. Additive solution reduced the level of miR-223 and irradiation reduced the level of let-73. 

Mirasol did not significantly reduce the level of any of the 11 tested microRNAs. 

And Intercept significantly reduced the level of 6 microRNAs on day 1, 1 microRNA on day 4, and 2 microRNAs on day 7. By day 7, let-7e was reduced by up to 70%.

The microRNA levels remained stable in the control sample for the entire 7-day storage period.

Platelet activation and function

Platelet counts in the Mirasol- and Intercept-treated platelets were significantly lower (P<0.001) on storage days 1, 4, and 7 compared with control platelets.

Pathogen-reduction treatments did not affect platelet microRNA synthesis or platelet microRNA function, nor did they induce the formation of cross-linked RNA adducts.

However, pathogen reduction caused platelet activation, which correlates with the observed reduction in platelet microRNAs. 

The investigators measured CD62P expression, a marker of platelet activation, on the platelet surface. The additive solution platelets and Intercept-treated platelets, and to a lesser degree the irradiation group, had greater CD62P surface expression than the control group (P<0.05) on day 1. 

The Mirasol group had similar activation to that of the control group.

On day 4, all treatment groups showed more activation than the control group (P<0.05). And on day 7, all groups had about the same activation level as the control group.

Pathogen reduction also impacted the aggregation response of platelets. Mirasol-treated platelets, which had the same aggregation response as that of controls on day 1, had no response on days 4 and 7. 

And the aggregation response for Intercept-treated and additive solution platelets was already absent on day 1 and remained so on days 4 and 7.

Additive solution and Intercept also reduced platelet volume on day 1, which the investigators say could be explained by the platelet activation and release of microparticles induced by the treatments.

MicroRNA release

The investigators hypothesized that activated stored platelets could release microRNAs through microparticles in the supernatant. So they collected supernatant from each of the 5 groups and analyzed their total content of miR-223, which is one of the most abundant platelet microRNAs. 

They discovered that the total amount of miR-223 was increased 30% to 86% in the microparticles released from additive solution and Intercept-treated platelets. They did not observe this increase in irradiation- or Mirasol-treated platelets compared to controls. 

"The platelets end up depleted of RNA so, once transfused, they're unable to do what they normally would," Dr Provost said. Nevertheless, the clinical implications of the reduction in platelet activation and impaired platelet aggregation after Intercept treatment remain to be established.

The pathogen-reduction treatments are already marketed in some European countries, notably Switzerland, France, and Germany, and are under consideration in other countries, including Canada and the United States.

"In light of what we have demonstrated, the potentially harmful effects of these treatments should be carefully evaluated in the countries where they are not yet approved. It should also be re-evaluated in those countries where they are," Dr Provost said.


Platelets contain up to a third of the human genome in the form of ribonucleic acid (RNA), which enables them to synthesize over 1,000 proteins essential to the normal functioning of the human body.

The researchers studied the effects of 3 pathogen-reduction strategies—irradiation, riboflavin plus UVB light (Mirasol), and amotosalen plus UVA light (Intercept)—on platelet microRNAs, messenger RNAs (mRNAs), activation, and function. 

They reported their findings in the journal Platelets.

The investigators collected 50 single-donor (apheresis) platelet concentrates (PCs) and subjected them to 5 treatments.

The control platelets were stored in donor plasma; the additive solution platelets were stored in 65% storage solution and 35% donor plasma; the irradiated platelets were treated with 30 Gy gamma irradiation and stored in donor plasma; the Mirasol-treated platelets were stored in donor plasma; and the Intercept-treated platelets were stored in the same solution as the additive solution group.

All treatments followed standard procedures or the manufacturer’s instructions.

After platelet isolation and RNA extraction, the investigators analyzed the platelets' microRNA and mRNA levels and assessed the treatments' impact on platelet activation and function.

MicroRNA profiles

They learned that platelets stored with additive solution or irradiation had significantly (P<0.05) reduced levels of one microRNA each, and only on day 7 of storage. Additive solution reduced the level of miR-223 and irradiation reduced the level of let-73. 

Mirasol did not significantly reduce the level of any of the 11 tested microRNAs.

And Intercept significantly reduced the level of 6 microRNAs on day 1, 1 microRNA on day 4, and 2 microRNAs on day 7. By day 7, let-7e was reduced by up to 70%.

The microRNA levels remained stable in the control sample for the entire 7-day storage period.

Platelet activation and function

Platelet counts in the Mirasol- and Intercept-treated platelets were significantly lower (P<0.001) on storage days 1, 4, and 7 compared with control platelets.

Pathogen-reduction treatments did not affect platelet microRNA synthesis or function, nor did they induce the formation of cross-linked RNA adducts.

However, pathogen reduction caused platelet activation, which correlated with the observed reduction in platelet microRNAs.

The investigators measured surface expression of CD62P, a marker of platelet activation. On day 1, the additive solution and Intercept-treated platelets, and to a lesser degree the irradiated platelets, had greater CD62P surface expression than the control platelets (P<0.05).

The Mirasol group had similar activation to that of the control group.

On day 4, all treatment groups showed more activation than the control group (P<0.05). And on day 7, all groups had about the same activation level as the control group.

Pathogen reduction also impacted the aggregation response of platelets. Mirasol-treated platelets, which had the same aggregation response as that of controls on day 1, had no response on days 4 and 7. 

And the aggregation response for Intercept-treated and additive solution platelets was already absent on day 1 and remained so on days 4 and 7.


Additive solution and Intercept also reduced platelet volume on day 1, which the investigators say could be explained by the platelet activation and release of microparticles induced by the treatments.

MicroRNA release

The investigators hypothesized that activated stored platelets could release microRNAs through microparticles in the supernatant. So they collected supernatant from each of the 5 groups and analyzed its total content of miR-223, one of the most abundant platelet microRNAs.

They discovered that the total amount of miR-223 was increased 30% to 86% in the microparticles released from additive solution and Intercept-treated platelets. They did not observe this increase in irradiation- or Mirasol-treated platelets compared to controls. 

"The platelets end up depleted of RNA so, once transfused, they're unable to do what they normally would," Dr Provost said. Nevertheless, the clinical implications of the reduction in platelet activation and impaired platelet aggregation after Intercept treatment remain to be established.

The pathogen-reduction treatments are already marketed in some European countries, notably Switzerland, France, and Germany, and are under consideration in other countries, including Canada and the United States.

"In light of what we have demonstrated, the potentially harmful effects of these treatments should be carefully evaluated in the countries where they are not yet approved. It should also be re-evaluated in those countries where they are," Dr Provost said.

Display Headline
Blood sterilization processes harmful to platelets

VIDEO: ACC/AHA lipid guidelines and diabetes

Article Type
Changed
Tue, 05/03/2022 - 15:49
Display Headline
VIDEO: ACC/AHA lipid guidelines and diabetes

SAN FRANCISCO – Those looking for guidance from the American Diabetes Association regarding the guidelines released last fall from the American College of Cardiology and the American Heart Association dropping cholesterol treatment goals will have to wait until next year.

That’s when the ADA’s Clinical Practice Recommendations, released each year in January, will incorporate the Professional Practice Committee’s review of the ACC/AHA guidelines and the evidence behind them. The new recommendations caused some controversy and raised questions about the treatment of certain patient groups, most notably those with diabetes.

The ADA hasn’t recommended any changes to its current guidelines, which still incorporate treatment to target. But it has been reviewing the guidelines to see if it would recommend any changes for its 2015 guidelines.

Dr. Robert E. Ratner, chief scientific and medical officer for the American Diabetes Association, further explained the organization’s position on treatment of lipids in patients with diabetes in a video interview at the annual scientific sessions of the ADA.

The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel

The association is also holding a debate at this year’s meeting to discuss the pros and cons of the new lipid guidelines for patients with diabetes.

In a press conference, Dr. Robert Eckel, professor of medicine and Charles A. Boettcher chair in atherosclerosis at the University of Colorado, Anschutz Medical Campus, Aurora, said he supported the ACC/AHA guidelines, having served on the Task Force on Practice Guidelines, and that he believed almost all patients with diabetes should be on a statin. He stressed that the new guidelines are evidence based.

But Dr. Henry Ginsberg, Irving Professor of Medicine and Director of the Irving Institute for Clinical and Translational Research at Columbia University, New York, argued that the guidelines’ evidence-based construct was too narrow.

In a video interview, Dr. Ginsberg further discussed his position and his practice tips for physicians.

Both physicians agreed that patients should be treated on an individual basis. For instance, patients who are statin intolerant won’t meet the guidelines’ criteria and "we’ll have to go beyond the guidelines," said Dr. Eckel.

[email protected]

On Twitter @naseemmiller

Article Source

AT THE ADA ANNUAL SCIENTIFIC SESSIONS